00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2338 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3603 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.067 The recommended git tool is: git 00:00:00.067 using credential 00000000-0000-0000-0000-000000000002 00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.099 Fetching changes from the remote Git repository 00:00:00.101 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.138 Using shallow fetch with depth 1 00:00:00.138 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.138 > git --version # timeout=10 00:00:00.168 > git --version # 'git version 2.39.2' 00:00:00.168 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.199 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.199 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.987 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.997 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.008 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:05.008 > git config core.sparsecheckout # timeout=10 00:00:05.020 > git read-tree -mu HEAD # timeout=10 00:00:05.034 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:05.055 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:05.055 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:05.166 [Pipeline] Start of Pipeline 00:00:05.178 [Pipeline] library 00:00:05.180 Loading library shm_lib@master 00:00:05.180 Library shm_lib@master is cached. Copying from home. 00:00:05.195 [Pipeline] node 00:00:05.210 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.212 [Pipeline] { 00:00:05.222 [Pipeline] catchError 00:00:05.224 [Pipeline] { 00:00:05.237 [Pipeline] wrap 00:00:05.245 [Pipeline] { 00:00:05.251 [Pipeline] stage 00:00:05.252 [Pipeline] { (Prologue) 00:00:05.446 [Pipeline] sh 00:00:05.729 + logger -p user.info -t JENKINS-CI 00:00:05.745 [Pipeline] echo 00:00:05.747 Node: WFP21 00:00:05.752 [Pipeline] sh 00:00:06.043 [Pipeline] setCustomBuildProperty 00:00:06.053 [Pipeline] echo 00:00:06.054 Cleanup processes 00:00:06.059 [Pipeline] sh 00:00:06.341 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.341 1989459 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.353 [Pipeline] sh 00:00:06.634 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.634 ++ grep -v 'sudo pgrep' 00:00:06.634 ++ awk '{print $1}' 00:00:06.634 + sudo kill -9 00:00:06.634 + true 00:00:06.647 [Pipeline] cleanWs 00:00:06.656 [WS-CLEANUP] Deleting project workspace... 00:00:06.656 [WS-CLEANUP] Deferred wipeout is used... 
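The "Cleanup processes" step traced above reduces to one reusable idiom: list anything still running under the workspace, drop the pgrep invocation itself from the listing, and kill whatever is left while tolerating an empty match. A minimal standalone sketch of that idiom in bash follows; the script name and argument handling are hypothetical, and only the pipeline itself comes from the traces above.

#!/usr/bin/env bash
# Hypothetical re-creation of the cleanup step traced in the log above.
workspace=${1:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

# pgrep -af prints "PID full-command-line"; filter out the pgrep line
# itself and keep only the PID column, exactly as the ++ traces show.
pids=$(sudo pgrep -af "$workspace" | grep -v 'sudo pgrep' | awk '{print $1}')

# With no survivors, "kill -9" gets no arguments and exits non-zero,
# which is why the log follows it with "+ true"; mirror that here.
sudo kill -9 $pids || true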
00:00:06.662 [WS-CLEANUP] done 00:00:06.664 [Pipeline] setCustomBuildProperty 00:00:06.675 [Pipeline] sh 00:00:06.954 + sudo git config --global --replace-all safe.directory '*' 00:00:07.067 [Pipeline] httpRequest 00:00:07.746 [Pipeline] echo 00:00:07.748 Sorcerer 10.211.164.101 is alive 00:00:07.756 [Pipeline] retry 00:00:07.758 [Pipeline] { 00:00:07.778 [Pipeline] httpRequest 00:00:07.783 HttpMethod: GET 00:00:07.783 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.783 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.795 Response Code: HTTP/1.1 200 OK 00:00:07.796 Success: Status code 200 is in the accepted range: 200,404 00:00:07.796 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:11.464 [Pipeline] } 00:00:11.481 [Pipeline] // retry 00:00:11.496 [Pipeline] sh 00:00:11.790 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:11.803 [Pipeline] httpRequest 00:00:12.184 [Pipeline] echo 00:00:12.186 Sorcerer 10.211.164.101 is alive 00:00:12.197 [Pipeline] retry 00:00:12.199 [Pipeline] { 00:00:12.214 [Pipeline] httpRequest 00:00:12.219 HttpMethod: GET 00:00:12.219 URL: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:12.220 Sending request to url: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:12.246 Response Code: HTTP/1.1 200 OK 00:00:12.246 Success: Status code 200 is in the accepted range: 200,404 00:00:12.247 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:02:04.546 [Pipeline] } 00:02:04.563 [Pipeline] // retry 00:02:04.571 [Pipeline] sh 00:02:04.857 + tar --no-same-owner -xf spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:02:07.406 [Pipeline] sh 00:02:07.691 + git -C spdk log --oneline -n5 00:02:07.691 fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer 00:02:07.691 12fc2abf1 test: Remove autopackage.sh 00:02:07.691 83ba90867 fio/bdev: fix typo in README 00:02:07.691 45379ed84 module/compress: Cleanup vol data, when claim fails 00:02:07.691 0afe95a3a bdev/nvme: use bdev_nvme linker script 00:02:07.709 [Pipeline] withCredentials 00:02:07.720 > git --version # timeout=10 00:02:07.733 > git --version # 'git version 2.39.2' 00:02:07.750 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:07.752 [Pipeline] { 00:02:07.762 [Pipeline] retry 00:02:07.764 [Pipeline] { 00:02:07.779 [Pipeline] sh 00:02:08.063 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:02:08.075 [Pipeline] } 00:02:08.092 [Pipeline] // retry 00:02:08.097 [Pipeline] } 00:02:08.115 [Pipeline] // withCredentials 00:02:08.124 [Pipeline] httpRequest 00:02:08.530 [Pipeline] echo 00:02:08.531 Sorcerer 10.211.164.101 is alive 00:02:08.542 [Pipeline] retry 00:02:08.544 [Pipeline] { 00:02:08.559 [Pipeline] httpRequest 00:02:08.563 HttpMethod: GET 00:02:08.564 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:08.564 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:08.570 Response Code: HTTP/1.1 200 OK 00:02:08.571 Success: Status code 200 is in the accepted range: 200,404 00:02:08.571 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:31.418 
[Pipeline] } 00:02:31.435 [Pipeline] // retry 00:02:31.443 [Pipeline] sh 00:02:31.728 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:33.129 [Pipeline] sh 00:02:33.414 + git -C dpdk log --oneline -n5 00:02:33.414 caf0f5d395 version: 22.11.4 00:02:33.414 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:33.414 dc9c799c7d vhost: fix missing spinlock unlock 00:02:33.414 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:33.414 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:33.423 [Pipeline] } 00:02:33.437 [Pipeline] // stage 00:02:33.446 [Pipeline] stage 00:02:33.448 [Pipeline] { (Prepare) 00:02:33.469 [Pipeline] writeFile 00:02:33.486 [Pipeline] sh 00:02:33.771 + logger -p user.info -t JENKINS-CI 00:02:33.784 [Pipeline] sh 00:02:34.068 + logger -p user.info -t JENKINS-CI 00:02:34.081 [Pipeline] sh 00:02:34.365 + cat autorun-spdk.conf 00:02:34.365 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:34.365 SPDK_TEST_NVMF=1 00:02:34.365 SPDK_TEST_NVME_CLI=1 00:02:34.365 SPDK_TEST_NVMF_NICS=mlx5 00:02:34.365 SPDK_RUN_UBSAN=1 00:02:34.365 NET_TYPE=phy 00:02:34.365 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:34.365 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:34.372 RUN_NIGHTLY=1 00:02:34.377 [Pipeline] readFile 00:02:34.403 [Pipeline] withEnv 00:02:34.405 [Pipeline] { 00:02:34.418 [Pipeline] sh 00:02:34.704 + set -ex 00:02:34.704 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:02:34.704 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:34.704 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:34.704 ++ SPDK_TEST_NVMF=1 00:02:34.704 ++ SPDK_TEST_NVME_CLI=1 00:02:34.704 ++ SPDK_TEST_NVMF_NICS=mlx5 00:02:34.704 ++ SPDK_RUN_UBSAN=1 00:02:34.704 ++ NET_TYPE=phy 00:02:34.704 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:34.704 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:34.704 ++ RUN_NIGHTLY=1 00:02:34.704 + case $SPDK_TEST_NVMF_NICS in 00:02:34.704 + DRIVERS=mlx5_ib 00:02:34.704 + [[ -n mlx5_ib ]] 00:02:34.704 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:34.704 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:41.275 rmmod: ERROR: Module irdma is not currently loaded 00:02:41.275 rmmod: ERROR: Module i40iw is not currently loaded 00:02:41.275 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:41.275 + true 00:02:41.275 + for D in $DRIVERS 00:02:41.275 + sudo modprobe mlx5_ib 00:02:41.275 + exit 0 00:02:41.285 [Pipeline] } 00:02:41.300 [Pipeline] // withEnv 00:02:41.305 [Pipeline] } 00:02:41.319 [Pipeline] // stage 00:02:41.329 [Pipeline] catchError 00:02:41.331 [Pipeline] { 00:02:41.344 [Pipeline] timeout 00:02:41.345 Timeout set to expire in 1 hr 0 min 00:02:41.346 [Pipeline] { 00:02:41.360 [Pipeline] stage 00:02:41.363 [Pipeline] { (Tests) 00:02:41.377 [Pipeline] sh 00:02:41.663 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:02:41.663 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:02:41.663 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:02:41.663 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:02:41.663 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:41.663 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:02:41.663 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:02:41.663 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:41.663 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:02:41.663 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:41.663 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:02:41.663 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:02:41.663 + source /etc/os-release 00:02:41.663 ++ NAME='Fedora Linux' 00:02:41.663 ++ VERSION='39 (Cloud Edition)' 00:02:41.663 ++ ID=fedora 00:02:41.663 ++ VERSION_ID=39 00:02:41.663 ++ VERSION_CODENAME= 00:02:41.663 ++ PLATFORM_ID=platform:f39 00:02:41.663 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:41.663 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:41.663 ++ LOGO=fedora-logo-icon 00:02:41.663 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:41.663 ++ HOME_URL=https://fedoraproject.org/ 00:02:41.663 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:41.663 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:41.663 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:41.663 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:41.663 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:41.663 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:41.663 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:41.663 ++ SUPPORT_END=2024-11-12 00:02:41.663 ++ VARIANT='Cloud Edition' 00:02:41.663 ++ VARIANT_ID=cloud 00:02:41.663 + uname -a 00:02:41.663 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:41.663 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:44.955 Hugepages 00:02:44.955 node hugesize free / total 00:02:44.955 node0 1048576kB 0 / 0 00:02:44.955 node0 2048kB 0 / 0 00:02:44.955 node1 1048576kB 0 / 0 00:02:44.955 node1 2048kB 0 / 0 00:02:44.955 00:02:44.955 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:44.955 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:44.955 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:44.955 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:44.955 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:44.955 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:44.955 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:44.955 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:44.955 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:44.955 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:44.955 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:44.955 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:44.955 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:44.955 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:44.955 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:44.955 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:44.955 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:44.955 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:44.955 + rm -f /tmp/spdk-ld-path 00:02:44.955 + source autorun-spdk.conf 00:02:44.955 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:44.955 ++ SPDK_TEST_NVMF=1 00:02:44.955 ++ SPDK_TEST_NVME_CLI=1 00:02:44.955 ++ SPDK_TEST_NVMF_NICS=mlx5 00:02:44.955 ++ SPDK_RUN_UBSAN=1 00:02:44.955 ++ NET_TYPE=phy 00:02:44.955 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:44.955 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:44.955 ++ RUN_NIGHTLY=1 00:02:44.955 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:44.955 + [[ -n '' ]] 00:02:44.955 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:44.955 + for M in /var/spdk/build-*-manifest.txt 
00:02:44.955 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:44.955 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:44.955 + for M in /var/spdk/build-*-manifest.txt 00:02:44.955 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:44.955 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:44.955 + for M in /var/spdk/build-*-manifest.txt 00:02:44.955 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:44.955 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:44.955 ++ uname 00:02:44.955 + [[ Linux == \L\i\n\u\x ]] 00:02:44.955 + sudo dmesg -T 00:02:44.955 + sudo dmesg --clear 00:02:44.955 + dmesg_pid=1990966 00:02:44.955 + [[ Fedora Linux == FreeBSD ]] 00:02:44.955 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:44.955 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:44.955 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:44.955 + [[ -x /usr/src/fio-static/fio ]] 00:02:44.955 + export FIO_BIN=/usr/src/fio-static/fio 00:02:44.955 + FIO_BIN=/usr/src/fio-static/fio 00:02:44.955 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:44.955 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:44.955 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:44.955 + sudo dmesg -Tw 00:02:44.955 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:44.955 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:44.955 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:44.955 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:44.955 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:44.955 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:44.955 15:21:22 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:44.955 15:21:22 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:44.955 15:21:22 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:44.955 15:21:22 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:44.955 15:21:22 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:44.955 15:21:22 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:02:44.955 15:21:22 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:02:44.955 15:21:22 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:02:44.955 15:21:22 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:44.956 15:21:22 -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:44.956 15:21:22 -- nvmf-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:02:44.956 15:21:22 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:44.956 15:21:22 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:45.215 15:21:22 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:45.215 15:21:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:45.215 15:21:22 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:45.215 15:21:22 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:45.215 15:21:22 -- scripts/common.sh@552 -- 
$ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:45.215 15:21:22 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:45.215 15:21:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.215 15:21:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.215 15:21:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.215 15:21:22 -- paths/export.sh@5 -- $ export PATH 00:02:45.215 15:21:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.215 15:21:22 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:45.215 15:21:22 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:45.215 15:21:22 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730643682.XXXXXX 00:02:45.215 15:21:22 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730643682.OzLh6B 00:02:45.215 15:21:22 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:45.215 15:21:22 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:02:45.215 15:21:22 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:45.215 15:21:22 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:02:45.215 15:21:22 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:45.215 15:21:22 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:45.215 15:21:22 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:45.215 15:21:22 -- common/autotest_common.sh@407 
-- $ xtrace_disable 00:02:45.215 15:21:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:45.215 15:21:22 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:02:45.215 15:21:22 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:45.215 15:21:22 -- pm/common@17 -- $ local monitor 00:02:45.215 15:21:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.215 15:21:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.215 15:21:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.215 15:21:22 -- pm/common@21 -- $ date +%s 00:02:45.215 15:21:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.215 15:21:22 -- pm/common@21 -- $ date +%s 00:02:45.215 15:21:22 -- pm/common@25 -- $ sleep 1 00:02:45.215 15:21:22 -- pm/common@21 -- $ date +%s 00:02:45.215 15:21:22 -- pm/common@21 -- $ date +%s 00:02:45.215 15:21:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730643682 00:02:45.215 15:21:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730643682 00:02:45.215 15:21:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730643682 00:02:45.215 15:21:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730643682 00:02:45.215 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730643682_collect-cpu-load.pm.log 00:02:45.215 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730643682_collect-cpu-temp.pm.log 00:02:45.215 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730643682_collect-vmstat.pm.log 00:02:45.215 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730643682_collect-bmc-pm.bmc.pm.log 00:02:46.152 15:21:23 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:46.152 15:21:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:46.152 15:21:23 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:46.152 15:21:23 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:46.152 15:21:23 -- spdk/autobuild.sh@16 -- $ date -u 00:02:46.152 Sun Nov 3 02:21:23 PM UTC 2024 00:02:46.152 15:21:23 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:46.152 v25.01-pre-124-gfa3ab7384 00:02:46.152 15:21:23 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:46.152 15:21:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:46.152 15:21:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:46.152 15:21:23 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:46.152 15:21:23 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:46.152 15:21:23 
-- common/autotest_common.sh@10 -- $ set +x 00:02:46.152 ************************************ 00:02:46.152 START TEST ubsan 00:02:46.152 ************************************ 00:02:46.152 15:21:23 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:46.152 using ubsan 00:02:46.152 00:02:46.152 real 0m0.001s 00:02:46.152 user 0m0.000s 00:02:46.152 sys 0m0.000s 00:02:46.152 15:21:23 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:46.152 15:21:23 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:46.152 ************************************ 00:02:46.152 END TEST ubsan 00:02:46.152 ************************************ 00:02:46.412 15:21:23 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:46.412 15:21:23 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:46.412 15:21:23 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:46.412 15:21:23 -- common/autotest_common.sh@1103 -- $ '[' 2 -le 1 ']' 00:02:46.412 15:21:23 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:46.412 15:21:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.412 ************************************ 00:02:46.412 START TEST build_native_dpdk 00:02:46.412 ************************************ 00:02:46.412 15:21:23 build_native_dpdk -- common/autotest_common.sh@1127 -- $ _build_native_dpdk 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:46.412 15:21:23 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:02:46.412 caf0f5d395 version: 22.11.4 00:02:46.412 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:46.412 dc9c799c7d vhost: fix missing spinlock unlock 00:02:46.412 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:46.412 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.412 15:21:24 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:46.412 patching file config/rte_config.h 00:02:46.412 Hunk #1 succeeded at 60 (offset 1 line). 00:02:46.412 15:21:24 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:46.412 15:21:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:46.413 15:21:24 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:46.413 patching file lib/pcapng/rte_pcapng.c 00:02:46.413 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:46.413 15:21:24 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.413 15:21:24 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:46.413 15:21:24 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:46.413 15:21:24 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:46.413 15:21:24 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:46.413 15:21:24 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:46.413 15:21:24 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:51.690 The Meson build system 00:02:51.690 Version: 1.5.0 00:02:51.690 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:02:51.690 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:02:51.690 Build type: native build 00:02:51.690 Program cat found: YES (/usr/bin/cat) 00:02:51.690 Project name: DPDK 00:02:51.690 Project version: 22.11.4 00:02:51.690 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:51.690 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:51.690 Host machine cpu family: x86_64 00:02:51.690 Host machine cpu: x86_64 00:02:51.690 Message: ## Building in Developer Mode ## 00:02:51.690 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:51.690 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:51.690 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:51.690 Program objdump found: YES (/usr/bin/objdump) 00:02:51.690 Program python3 found: YES (/usr/bin/python3) 00:02:51.690 Program cat found: YES (/usr/bin/cat) 00:02:51.690 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:51.690 Checking for size of "void *" : 8 00:02:51.690 Checking for size of "void *" : 8 (cached) 00:02:51.690 Library m found: YES 00:02:51.690 Library numa found: YES 00:02:51.690 Has header "numaif.h" : YES 00:02:51.690 Library fdt found: NO 00:02:51.690 Library execinfo found: NO 00:02:51.690 Has header "execinfo.h" : YES 00:02:51.690 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:51.690 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:51.690 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:51.690 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:51.690 Run-time dependency openssl found: YES 3.1.1 00:02:51.690 Run-time dependency libpcap found: YES 1.10.4 00:02:51.690 Has header "pcap.h" with dependency libpcap: YES 00:02:51.690 Compiler for C supports arguments -Wcast-qual: YES 00:02:51.690 Compiler for C supports arguments -Wdeprecated: YES 00:02:51.690 Compiler for C supports arguments -Wformat: YES 00:02:51.690 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:51.690 Compiler for C supports arguments -Wformat-security: NO 00:02:51.690 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:51.690 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:51.690 Compiler for C supports arguments -Wnested-externs: YES 00:02:51.690 Compiler for C supports arguments -Wold-style-definition: YES 00:02:51.690 Compiler for C supports arguments -Wpointer-arith: YES 00:02:51.690 Compiler for C supports arguments -Wsign-compare: YES 00:02:51.690 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:51.690 Compiler for C supports arguments -Wundef: YES 00:02:51.690 Compiler for C supports arguments -Wwrite-strings: YES 00:02:51.690 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:51.690 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:51.690 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:51.690 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:51.690 Compiler for C supports arguments -mavx512f: YES 00:02:51.690 Checking if "AVX512 checking" compiles: YES 00:02:51.690 Fetching value of define "__SSE4_2__" : 1 00:02:51.690 Fetching value of define "__AES__" : 1 00:02:51.690 Fetching value of define "__AVX__" : 1 00:02:51.690 Fetching value of define "__AVX2__" : 1 00:02:51.690 Fetching value of define "__AVX512BW__" : 1 00:02:51.690 Fetching value of define "__AVX512CD__" : 1 00:02:51.690 Fetching value of define "__AVX512DQ__" : 1 00:02:51.690 Fetching value of define "__AVX512F__" : 1 00:02:51.690 Fetching value of define "__AVX512VL__" : 1 00:02:51.690 Fetching value of define "__PCLMUL__" : 1 00:02:51.690 Fetching value of define "__RDRND__" : 1 00:02:51.690 Fetching value of define "__RDSEED__" : 1 00:02:51.690 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:51.690 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:51.690 Message: lib/kvargs: Defining dependency "kvargs" 00:02:51.690 Message: lib/telemetry: Defining dependency "telemetry" 00:02:51.690 Checking for function "getentropy" : YES 00:02:51.690 Message: lib/eal: Defining dependency "eal" 00:02:51.690 Message: lib/ring: Defining dependency "ring" 00:02:51.690 Message: lib/rcu: Defining dependency "rcu" 00:02:51.690 Message: lib/mempool: Defining dependency "mempool" 00:02:51.690 Message: lib/mbuf: Defining dependency "mbuf" 00:02:51.690 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:51.690 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:51.690 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:51.690 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:51.690 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:51.690 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:51.690 Compiler for C supports arguments -mpclmul: YES 00:02:51.690 Compiler for C supports arguments -maes: YES 00:02:51.690 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:51.690 Compiler for C supports arguments -mavx512bw: YES 00:02:51.690 Compiler for C supports arguments -mavx512dq: YES 00:02:51.690 Compiler for C supports arguments -mavx512vl: YES 00:02:51.690 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:51.690 Compiler for C supports arguments -mavx2: YES 00:02:51.690 Compiler for C supports arguments -mavx: YES 00:02:51.690 Message: lib/net: Defining dependency "net" 00:02:51.690 Message: lib/meter: Defining dependency "meter" 00:02:51.690 Message: lib/ethdev: Defining dependency "ethdev" 00:02:51.690 Message: lib/pci: Defining dependency "pci" 00:02:51.690 Message: lib/cmdline: Defining dependency "cmdline" 00:02:51.690 Message: lib/metrics: Defining dependency "metrics" 00:02:51.690 Message: lib/hash: Defining dependency "hash" 00:02:51.690 Message: lib/timer: Defining dependency "timer" 00:02:51.690 Fetching value of define "__AVX2__" : 1 (cached) 00:02:51.690 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:51.690 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:51.690 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:51.690 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:51.690 Message: lib/acl: Defining dependency "acl" 00:02:51.690 Message: lib/bbdev: Defining dependency "bbdev" 00:02:51.690 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:51.690 Run-time dependency libelf found: YES 0.191 00:02:51.690 Message: lib/bpf: Defining dependency "bpf" 00:02:51.690 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:51.690 Message: lib/compressdev: Defining dependency "compressdev" 00:02:51.691 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:51.691 Message: lib/distributor: Defining dependency "distributor" 00:02:51.691 Message: lib/efd: Defining dependency "efd" 00:02:51.691 Message: lib/eventdev: Defining dependency "eventdev" 00:02:51.691 Message: lib/gpudev: Defining dependency "gpudev" 00:02:51.691 Message: lib/gro: Defining dependency "gro" 00:02:51.691 Message: lib/gso: Defining dependency "gso" 00:02:51.691 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:51.691 Message: lib/jobstats: Defining dependency "jobstats" 00:02:51.691 Message: lib/latencystats: Defining dependency "latencystats" 00:02:51.691 Message: lib/lpm: Defining dependency "lpm" 00:02:51.691 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:51.691 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:51.691 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:51.691 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:51.691 Message: lib/member: Defining dependency "member" 00:02:51.691 Message: lib/pcapng: Defining dependency "pcapng" 00:02:51.691 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:51.691 Message: lib/power: Defining dependency "power" 00:02:51.691 Message: lib/rawdev: Defining dependency "rawdev" 00:02:51.691 Message: lib/regexdev: Defining dependency "regexdev" 00:02:51.691 Message: lib/dmadev: 
Defining dependency "dmadev" 00:02:51.691 Message: lib/rib: Defining dependency "rib" 00:02:51.691 Message: lib/reorder: Defining dependency "reorder" 00:02:51.691 Message: lib/sched: Defining dependency "sched" 00:02:51.691 Message: lib/security: Defining dependency "security" 00:02:51.691 Message: lib/stack: Defining dependency "stack" 00:02:51.691 Has header "linux/userfaultfd.h" : YES 00:02:51.691 Message: lib/vhost: Defining dependency "vhost" 00:02:51.691 Message: lib/ipsec: Defining dependency "ipsec" 00:02:51.691 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:51.691 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:51.691 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:51.691 Message: lib/fib: Defining dependency "fib" 00:02:51.691 Message: lib/port: Defining dependency "port" 00:02:51.691 Message: lib/pdump: Defining dependency "pdump" 00:02:51.691 Message: lib/table: Defining dependency "table" 00:02:51.691 Message: lib/pipeline: Defining dependency "pipeline" 00:02:51.691 Message: lib/graph: Defining dependency "graph" 00:02:51.691 Message: lib/node: Defining dependency "node" 00:02:51.691 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:51.691 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:51.691 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:51.691 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:51.691 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:51.691 Compiler for C supports arguments -Wno-unused-value: YES 00:02:51.691 Compiler for C supports arguments -Wno-format: YES 00:02:51.691 Compiler for C supports arguments -Wno-format-security: YES 00:02:51.691 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:52.267 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:52.267 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:52.267 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:52.267 Fetching value of define "__AVX2__" : 1 (cached) 00:02:52.267 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:52.267 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:52.267 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:52.267 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:52.267 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:52.267 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:52.267 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:52.267 Configuring doxy-api.conf using configuration 00:02:52.267 Program sphinx-build found: NO 00:02:52.267 Configuring rte_build_config.h using configuration 00:02:52.267 Message: 00:02:52.267 ================= 00:02:52.267 Applications Enabled 00:02:52.267 ================= 00:02:52.267 00:02:52.267 apps: 00:02:52.267 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:52.267 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:52.267 test-security-perf, 00:02:52.267 00:02:52.267 Message: 00:02:52.267 ================= 00:02:52.267 Libraries Enabled 00:02:52.267 ================= 00:02:52.267 00:02:52.267 libs: 00:02:52.267 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:52.267 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:52.267 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:52.267 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:52.267 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:52.267 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:52.267 table, pipeline, graph, node, 00:02:52.267 00:02:52.267 Message: 00:02:52.267 =============== 00:02:52.267 Drivers Enabled 00:02:52.267 =============== 00:02:52.267 00:02:52.267 common: 00:02:52.267 00:02:52.267 bus: 00:02:52.267 pci, vdev, 00:02:52.267 mempool: 00:02:52.267 ring, 00:02:52.267 dma: 00:02:52.267 00:02:52.267 net: 00:02:52.267 i40e, 00:02:52.267 raw: 00:02:52.267 00:02:52.267 crypto: 00:02:52.267 00:02:52.267 compress: 00:02:52.267 00:02:52.267 regex: 00:02:52.267 00:02:52.267 vdpa: 00:02:52.267 00:02:52.267 event: 00:02:52.267 00:02:52.267 baseband: 00:02:52.267 00:02:52.267 gpu: 00:02:52.267 00:02:52.267 00:02:52.267 Message: 00:02:52.267 ================= 00:02:52.267 Content Skipped 00:02:52.267 ================= 00:02:52.267 00:02:52.267 apps: 00:02:52.267 00:02:52.267 libs: 00:02:52.267 kni: explicitly disabled via build config (deprecated lib) 00:02:52.267 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:52.267 00:02:52.267 drivers: 00:02:52.268 common/cpt: not in enabled drivers build config 00:02:52.268 common/dpaax: not in enabled drivers build config 00:02:52.268 common/iavf: not in enabled drivers build config 00:02:52.268 common/idpf: not in enabled drivers build config 00:02:52.268 common/mvep: not in enabled drivers build config 00:02:52.268 common/octeontx: not in enabled drivers build config 00:02:52.268 bus/auxiliary: not in enabled drivers build config 00:02:52.268 bus/dpaa: not in enabled drivers build config 00:02:52.268 bus/fslmc: not in enabled drivers build config 00:02:52.268 bus/ifpga: not in enabled drivers build config 00:02:52.268 bus/vmbus: not in enabled drivers build config 00:02:52.268 common/cnxk: not in enabled drivers build config 00:02:52.268 common/mlx5: not in enabled drivers build config 00:02:52.268 common/qat: not in enabled drivers build config 00:02:52.268 common/sfc_efx: not in enabled drivers build config 00:02:52.268 mempool/bucket: not in enabled drivers build config 00:02:52.268 mempool/cnxk: not in enabled drivers build config 00:02:52.268 mempool/dpaa: not in enabled drivers build config 00:02:52.268 mempool/dpaa2: not in enabled drivers build config 00:02:52.268 mempool/octeontx: not in enabled drivers build config 00:02:52.268 mempool/stack: not in enabled drivers build config 00:02:52.268 dma/cnxk: not in enabled drivers build config 00:02:52.268 dma/dpaa: not in enabled drivers build config 00:02:52.268 dma/dpaa2: not in enabled drivers build config 00:02:52.268 dma/hisilicon: not in enabled drivers build config 00:02:52.268 dma/idxd: not in enabled drivers build config 00:02:52.268 dma/ioat: not in enabled drivers build config 00:02:52.268 dma/skeleton: not in enabled drivers build config 00:02:52.268 net/af_packet: not in enabled drivers build config 00:02:52.268 net/af_xdp: not in enabled drivers build config 00:02:52.268 net/ark: not in enabled drivers build config 00:02:52.268 net/atlantic: not in enabled drivers build config 00:02:52.268 net/avp: not in enabled drivers build config 00:02:52.268 net/axgbe: not in enabled drivers build config 00:02:52.268 net/bnx2x: not in enabled drivers build config 00:02:52.268 net/bnxt: not in enabled drivers build config 00:02:52.268 net/bonding: not in enabled drivers build config 00:02:52.268 net/cnxk: not in enabled drivers build config 
00:02:52.268 net/cxgbe: not in enabled drivers build config
00:02:52.268 net/dpaa: not in enabled drivers build config
00:02:52.268 net/dpaa2: not in enabled drivers build config
00:02:52.268 net/e1000: not in enabled drivers build config
00:02:52.268 net/ena: not in enabled drivers build config
00:02:52.268 net/enetc: not in enabled drivers build config
00:02:52.268 net/enetfec: not in enabled drivers build config
00:02:52.268 net/enic: not in enabled drivers build config
00:02:52.268 net/failsafe: not in enabled drivers build config
00:02:52.268 net/fm10k: not in enabled drivers build config
00:02:52.268 net/gve: not in enabled drivers build config
00:02:52.268 net/hinic: not in enabled drivers build config
00:02:52.268 net/hns3: not in enabled drivers build config
00:02:52.268 net/iavf: not in enabled drivers build config
00:02:52.268 net/ice: not in enabled drivers build config
00:02:52.268 net/idpf: not in enabled drivers build config
00:02:52.268 net/igc: not in enabled drivers build config
00:02:52.268 net/ionic: not in enabled drivers build config
00:02:52.268 net/ipn3ke: not in enabled drivers build config
00:02:52.268 net/ixgbe: not in enabled drivers build config
00:02:52.268 net/kni: not in enabled drivers build config
00:02:52.268 net/liquidio: not in enabled drivers build config
00:02:52.268 net/mana: not in enabled drivers build config
00:02:52.268 net/memif: not in enabled drivers build config
00:02:52.268 net/mlx4: not in enabled drivers build config
00:02:52.268 net/mlx5: not in enabled drivers build config
00:02:52.268 net/mvneta: not in enabled drivers build config
00:02:52.268 net/mvpp2: not in enabled drivers build config
00:02:52.268 net/netvsc: not in enabled drivers build config
00:02:52.268 net/nfb: not in enabled drivers build config
00:02:52.268 net/nfp: not in enabled drivers build config
00:02:52.268 net/ngbe: not in enabled drivers build config
00:02:52.268 net/null: not in enabled drivers build config
00:02:52.268 net/octeontx: not in enabled drivers build config
00:02:52.268 net/octeon_ep: not in enabled drivers build config
00:02:52.268 net/pcap: not in enabled drivers build config
00:02:52.268 net/pfe: not in enabled drivers build config
00:02:52.268 net/qede: not in enabled drivers build config
00:02:52.268 net/ring: not in enabled drivers build config
00:02:52.268 net/sfc: not in enabled drivers build config
00:02:52.268 net/softnic: not in enabled drivers build config
00:02:52.268 net/tap: not in enabled drivers build config
00:02:52.268 net/thunderx: not in enabled drivers build config
00:02:52.268 net/txgbe: not in enabled drivers build config
00:02:52.268 net/vdev_netvsc: not in enabled drivers build config
00:02:52.268 net/vhost: not in enabled drivers build config
00:02:52.268 net/virtio: not in enabled drivers build config
00:02:52.268 net/vmxnet3: not in enabled drivers build config
00:02:52.268 raw/cnxk_bphy: not in enabled drivers build config
00:02:52.268 raw/cnxk_gpio: not in enabled drivers build config
00:02:52.268 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:52.268 raw/ifpga: not in enabled drivers build config
00:02:52.268 raw/ntb: not in enabled drivers build config
00:02:52.268 raw/skeleton: not in enabled drivers build config
00:02:52.268 crypto/armv8: not in enabled drivers build config
00:02:52.268 crypto/bcmfs: not in enabled drivers build config
00:02:52.268 crypto/caam_jr: not in enabled drivers build config
00:02:52.268 crypto/ccp: not in enabled drivers build config
00:02:52.268 crypto/cnxk: not in enabled drivers build config
00:02:52.268 crypto/dpaa_sec: not in enabled drivers build config
00:02:52.268 crypto/dpaa2_sec: not in enabled drivers build config
00:02:52.268 crypto/ipsec_mb: not in enabled drivers build config
00:02:52.268 crypto/mlx5: not in enabled drivers build config
00:02:52.268 crypto/mvsam: not in enabled drivers build config
00:02:52.268 crypto/nitrox: not in enabled drivers build config
00:02:52.268 crypto/null: not in enabled drivers build config
00:02:52.268 crypto/octeontx: not in enabled drivers build config
00:02:52.268 crypto/openssl: not in enabled drivers build config
00:02:52.268 crypto/scheduler: not in enabled drivers build config
00:02:52.268 crypto/uadk: not in enabled drivers build config
00:02:52.268 crypto/virtio: not in enabled drivers build config
00:02:52.268 compress/isal: not in enabled drivers build config
00:02:52.268 compress/mlx5: not in enabled drivers build config
00:02:52.268 compress/octeontx: not in enabled drivers build config
00:02:52.268 compress/zlib: not in enabled drivers build config
00:02:52.268 regex/mlx5: not in enabled drivers build config
00:02:52.268 regex/cn9k: not in enabled drivers build config
00:02:52.268 vdpa/ifc: not in enabled drivers build config
00:02:52.268 vdpa/mlx5: not in enabled drivers build config
00:02:52.268 vdpa/sfc: not in enabled drivers build config
00:02:52.268 event/cnxk: not in enabled drivers build config
00:02:52.268 event/dlb2: not in enabled drivers build config
00:02:52.268 event/dpaa: not in enabled drivers build config
00:02:52.268 event/dpaa2: not in enabled drivers build config
00:02:52.268 event/dsw: not in enabled drivers build config
00:02:52.268 event/opdl: not in enabled drivers build config
00:02:52.268 event/skeleton: not in enabled drivers build config
00:02:52.268 event/sw: not in enabled drivers build config
00:02:52.268 event/octeontx: not in enabled drivers build config
00:02:52.268 baseband/acc: not in enabled drivers build config
00:02:52.268 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:52.268 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:52.268 baseband/la12xx: not in enabled drivers build config
00:02:52.268 baseband/null: not in enabled drivers build config
00:02:52.268 baseband/turbo_sw: not in enabled drivers build config
00:02:52.268 gpu/cuda: not in enabled drivers build config
00:02:52.268 
00:02:52.268 Build targets in project: 311
00:02:52.268 
00:02:52.268 DPDK 22.11.4
00:02:52.268 
00:02:52.268 User defined options
00:02:52.268 libdir : lib
00:02:52.268 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:02:52.268 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:52.268 c_link_args :
00:02:52.268 enable_docs : false
00:02:52.268 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:52.268 enable_kmods : false
00:02:52.268 machine : native
00:02:52.268 tests : false
00:02:52.268 
00:02:52.268 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:52.268 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
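The configure summary above corresponds, roughly, to the following meson/ninja invocation. This is a minimal sketch reconstructed from the "User defined options" block (the actual command is issued by the common/autobuild_common.sh wrapper referenced below, whose exact flags are not shown verbatim in this log), written in the non-deprecated `meson setup` form that the warning above asks for:

    # Minimal sketch, assuming the workspace layout visible in the log;
    # option values are copied from the "User defined options" summary above.
    cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j112

Only the PCI/vdev bus drivers, the ring mempool driver, and the i40e net driver (plus its base code) are requested through enable_drivers, which is why every other driver above is reported as "not in enabled drivers build config".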
00:02:52.268 15:21:29 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:02:52.268 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:52.531 [1/740] Generating lib/rte_telemetry_def with a custom command 00:02:52.531 [2/740] Generating lib/rte_kvargs_def with a custom command 00:02:52.531 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:52.531 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:52.531 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:52.531 [6/740] Generating lib/rte_eal_def with a custom command 00:02:52.531 [7/740] Generating lib/rte_ring_def with a custom command 00:02:52.531 [8/740] Generating lib/rte_mempool_mingw with a custom command 00:02:52.531 [9/740] Generating lib/rte_rcu_mingw with a custom command 00:02:52.531 [10/740] Generating lib/rte_rcu_def with a custom command 00:02:52.531 [11/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:52.531 [12/740] Generating lib/rte_net_def with a custom command 00:02:52.531 [13/740] Generating lib/rte_eal_mingw with a custom command 00:02:52.531 [14/740] Generating lib/rte_ring_mingw with a custom command 00:02:52.531 [15/740] Generating lib/rte_mempool_def with a custom command 00:02:52.531 [16/740] Generating lib/rte_meter_def with a custom command 00:02:52.531 [17/740] Generating lib/rte_meter_mingw with a custom command 00:02:52.531 [18/740] Generating lib/rte_mbuf_def with a custom command 00:02:52.531 [19/740] Generating lib/rte_net_mingw with a custom command 00:02:52.531 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:52.531 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:52.531 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:52.531 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.531 [24/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.531 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:52.531 [26/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:52.531 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:52.531 [28/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:52.531 [29/740] Generating lib/rte_ethdev_def with a custom command 00:02:52.531 [30/740] Generating lib/rte_pci_mingw with a custom command 00:02:52.531 [31/740] Generating lib/rte_pci_def with a custom command 00:02:52.531 [32/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:52.531 [33/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:52.531 [34/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:52.531 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:52.531 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:52.531 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:52.531 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:52.531 [39/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:52.791 [40/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:52.791 [41/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:52.791 [42/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:52.791 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:52.791 [44/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:52.791 [45/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:52.791 [46/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:52.791 [47/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:52.791 [48/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:52.791 [49/740] Generating lib/rte_cmdline_def with a custom command 00:02:52.791 [50/740] Generating lib/rte_metrics_def with a custom command 00:02:52.791 [51/740] Generating lib/rte_metrics_mingw with a custom command 00:02:52.791 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:52.791 [53/740] Generating lib/rte_hash_mingw with a custom command 00:02:52.791 [54/740] Linking static target lib/librte_kvargs.a 00:02:52.791 [55/740] Generating lib/rte_hash_def with a custom command 00:02:52.791 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:52.791 [57/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:52.791 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:52.791 [59/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:52.791 [60/740] Generating lib/rte_timer_def with a custom command 00:02:52.791 [61/740] Generating lib/rte_timer_mingw with a custom command 00:02:52.791 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:52.791 [63/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:52.791 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:52.791 [65/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:52.791 [66/740] Generating lib/rte_acl_def with a custom command 00:02:52.791 [67/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:52.791 [68/740] Generating lib/rte_acl_mingw with a custom command 00:02:52.791 [69/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:52.791 [70/740] Generating lib/rte_bbdev_def with a custom command 00:02:52.791 [71/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:52.791 [72/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:52.791 [73/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:52.791 [74/740] Generating lib/rte_bitratestats_def with a custom command 00:02:52.791 [75/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:52.791 [76/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:52.791 [77/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:52.791 [78/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:52.791 [79/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:52.791 [80/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:52.791 [81/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:52.791 [82/740] Linking static target lib/librte_meter.a 00:02:52.791 [83/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 
00:02:52.791 [84/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:52.791 [85/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:52.792 [86/740] Linking static target lib/librte_pci.a 00:02:52.792 [87/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:52.792 [88/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:52.792 [89/740] Generating lib/rte_bpf_mingw with a custom command 00:02:52.792 [90/740] Generating lib/rte_cfgfile_def with a custom command 00:02:52.792 [91/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:52.792 [92/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:52.792 [93/740] Generating lib/rte_bpf_def with a custom command 00:02:52.792 [94/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:52.792 [95/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:52.792 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:52.792 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:52.792 [98/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:52.792 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:52.792 [100/740] Generating lib/rte_compressdev_def with a custom command 00:02:52.792 [101/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:52.792 [102/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:52.792 [103/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:52.792 [104/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:52.792 [105/740] Linking static target lib/librte_ring.a 00:02:52.792 [106/740] Generating lib/rte_cryptodev_def with a custom command 00:02:52.792 [107/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:52.792 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:52.792 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:52.792 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:52.792 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:52.792 [112/740] Generating lib/rte_distributor_mingw with a custom command 00:02:52.792 [113/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:52.792 [114/740] Generating lib/rte_distributor_def with a custom command 00:02:52.792 [115/740] Generating lib/rte_efd_def with a custom command 00:02:52.792 [116/740] Generating lib/rte_efd_mingw with a custom command 00:02:52.792 [117/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:52.792 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:52.792 [119/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:53.058 [120/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:53.058 [121/740] Generating lib/rte_eventdev_def with a custom command 00:02:53.058 [122/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:53.058 [123/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:53.058 [124/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:53.058 [125/740] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:53.058 [126/740] Generating lib/rte_gpudev_def with a custom command 00:02:53.058 [127/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:53.058 [128/740] Generating lib/rte_gro_def with a custom command 00:02:53.059 [129/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:53.059 [130/740] Generating lib/rte_gro_mingw with a custom command 00:02:53.059 [131/740] Generating lib/rte_gso_def with a custom command 00:02:53.059 [132/740] Generating lib/rte_gso_mingw with a custom command 00:02:53.059 [133/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.059 [134/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:53.059 [135/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:53.059 [136/740] Generating lib/rte_ip_frag_def with a custom command 00:02:53.059 [137/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:53.059 [138/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.059 [139/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.059 [140/740] Generating lib/rte_jobstats_def with a custom command 00:02:53.059 [141/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:53.059 [142/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.059 [143/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:53.059 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:53.318 [145/740] Linking target lib/librte_kvargs.so.23.0 00:02:53.318 [146/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:53.318 [147/740] Generating lib/rte_latencystats_def with a custom command 00:02:53.318 [148/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:53.318 [149/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:53.318 [150/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:53.318 [151/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:53.318 [152/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:53.318 [153/740] Linking static target lib/librte_cfgfile.a 00:02:53.318 [154/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:53.318 [155/740] Generating lib/rte_lpm_def with a custom command 00:02:53.318 [156/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.318 [157/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:53.318 [158/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:53.318 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:53.318 [160/740] Generating lib/rte_lpm_mingw with a custom command 00:02:53.318 [161/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:53.318 [162/740] Generating lib/rte_member_def with a custom command 00:02:53.318 [163/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:53.318 [164/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:53.318 [165/740] Generating lib/rte_member_mingw with a custom command 00:02:53.318 [166/740] Generating lib/rte_pcapng_def with a custom command 00:02:53.318 [167/740] Generating 
lib/rte_pcapng_mingw with a custom command 00:02:53.319 [168/740] Linking static target lib/librte_jobstats.a 00:02:53.319 [169/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:53.319 [170/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:53.319 [171/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.319 [172/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:53.319 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:53.319 [174/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:53.319 [175/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:53.319 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:53.319 [177/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:53.319 [178/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:53.319 [179/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:53.319 [180/740] Linking static target lib/librte_timer.a 00:02:53.319 [181/740] Generating lib/rte_power_def with a custom command 00:02:53.319 [182/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:53.319 [183/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:53.319 [184/740] Generating lib/rte_power_mingw with a custom command 00:02:53.319 [185/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:53.319 [186/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:53.319 [187/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:53.319 [188/740] Linking static target lib/librte_cmdline.a 00:02:53.319 [189/740] Linking static target lib/librte_metrics.a 00:02:53.319 [190/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:53.319 [191/740] Generating lib/rte_rawdev_def with a custom command 00:02:53.319 [192/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:53.319 [193/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:53.319 [194/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:53.319 [195/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:53.319 [196/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:53.319 [197/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:53.319 [198/740] Linking static target lib/librte_telemetry.a 00:02:53.319 [199/740] Generating lib/rte_regexdev_def with a custom command 00:02:53.319 [200/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:53.319 [201/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:53.319 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:53.319 [203/740] Generating lib/rte_dmadev_def with a custom command 00:02:53.319 [204/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:53.578 [205/740] Generating lib/rte_rib_def with a custom command 00:02:53.578 [206/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:53.578 [207/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:53.578 [208/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:53.578 [209/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:53.578 [210/740] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:53.578 [211/740] Generating lib/rte_rib_mingw with a custom command 00:02:53.578 [212/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:53.578 [213/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:53.578 [214/740] Generating lib/rte_reorder_mingw with a custom command 00:02:53.578 [215/740] Generating lib/rte_reorder_def with a custom command 00:02:53.578 [216/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:53.578 [217/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:53.578 [218/740] Linking static target lib/librte_bitratestats.a 00:02:53.578 [219/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:53.578 [220/740] Generating lib/rte_sched_def with a custom command 00:02:53.578 [221/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:53.578 [222/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:53.578 [223/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.578 [224/740] Generating lib/rte_sched_mingw with a custom command 00:02:53.578 [225/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:53.578 [226/740] Generating lib/rte_security_def with a custom command 00:02:53.578 [227/740] Generating lib/rte_security_mingw with a custom command 00:02:53.578 [228/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:53.579 [229/740] Linking static target lib/librte_net.a 00:02:53.579 [230/740] Generating lib/rte_stack_mingw with a custom command 00:02:53.579 [231/740] Generating lib/rte_stack_def with a custom command 00:02:53.579 [232/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:53.579 [233/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:53.579 [234/740] Generating lib/rte_vhost_mingw with a custom command 00:02:53.579 [235/740] Generating lib/rte_vhost_def with a custom command 00:02:53.579 [236/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:53.579 [237/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:53.579 [238/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:53.579 [239/740] Generating lib/rte_ipsec_def with a custom command 00:02:53.579 [240/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:53.579 [241/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:53.579 [242/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:53.579 [243/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:53.579 [244/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:53.579 [245/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:53.579 [246/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:53.579 [247/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:53.579 [248/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:53.579 [249/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:53.579 [250/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:53.579 [251/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:53.579 [252/740] Generating lib/rte_fib_def with a custom command 00:02:53.579 [253/740] 
Generating lib/rte_fib_mingw with a custom command 00:02:53.579 [254/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:53.579 [255/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:53.579 [256/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:53.579 [257/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:53.579 [258/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:53.579 [259/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:53.579 [260/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:53.579 [261/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:53.579 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:53.579 [263/740] Linking static target lib/librte_stack.a 00:02:53.579 [264/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:53.841 [265/740] Generating lib/rte_port_def with a custom command 00:02:53.841 [266/740] Linking static target lib/librte_compressdev.a 00:02:53.841 [267/740] Generating lib/rte_port_mingw with a custom command 00:02:53.841 [268/740] Generating lib/rte_pdump_def with a custom command 00:02:53.841 [269/740] Generating lib/rte_pdump_mingw with a custom command 00:02:53.841 [270/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:53.841 [271/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:53.841 [272/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:53.841 [273/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.841 [274/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:53.841 [275/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:53.841 [276/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:53.841 [277/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:53.841 [278/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.841 [279/740] Linking static target lib/librte_rcu.a 00:02:53.841 [280/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:53.841 [281/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:53.841 [282/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.841 [283/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.841 [284/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:53.841 [285/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:53.841 [286/740] Linking static target lib/librte_rawdev.a 00:02:53.841 [287/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:53.841 [288/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:53.841 [289/740] Generating lib/rte_table_def with a custom command 00:02:53.841 [290/740] Linking static target lib/librte_mempool.a 00:02:53.841 [291/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:53.841 [292/740] Generating lib/rte_table_mingw with a custom command 00:02:53.841 [293/740] Linking static target lib/librte_bbdev.a 00:02:53.841 [294/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:53.841 [295/740] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:53.842 [296/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:53.842 [297/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:53.842 [298/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:53.842 [299/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:53.842 [300/740] Linking static target lib/librte_dmadev.a 00:02:53.842 [301/740] Linking static target lib/librte_gro.a 00:02:53.842 [302/740] Linking static target lib/librte_gpudev.a 00:02:53.842 [303/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.842 [304/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:54.107 [305/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:54.107 [306/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.107 [307/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:54.107 [308/740] Generating lib/rte_pipeline_def with a custom command 00:02:54.107 [309/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:54.107 [310/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:54.107 [311/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.107 [312/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.107 [313/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:54.107 [314/740] Linking static target lib/librte_gso.a 00:02:54.107 [315/740] Linking static target lib/librte_latencystats.a 00:02:54.107 [316/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:54.107 [317/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:54.107 [318/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:54.107 [319/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:54.107 [320/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:54.107 [321/740] Linking static target lib/librte_distributor.a 00:02:54.107 [322/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.107 [323/740] Generating lib/rte_graph_def with a custom command 00:02:54.107 [324/740] Linking target lib/librte_telemetry.so.23.0 00:02:54.107 [325/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:54.107 [326/740] Generating lib/rte_graph_mingw with a custom command 00:02:54.107 [327/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:54.107 [328/740] Linking static target lib/librte_ip_frag.a 00:02:54.107 [329/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:54.107 [330/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:54.107 [331/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:54.107 [332/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:54.107 [333/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:54.107 [334/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:54.107 [335/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:54.107 [336/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:54.107 [337/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:54.107 [338/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:54.370 [339/740] Linking static target lib/librte_regexdev.a 00:02:54.370 [340/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:54.370 [341/740] Generating lib/rte_node_def with a custom command 00:02:54.370 [342/740] Generating lib/rte_node_mingw with a custom command 00:02:54.370 [343/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:54.370 [344/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:54.370 [345/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.370 [346/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.370 [347/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.370 [348/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:54.370 [349/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:54.370 [350/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.370 [351/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.370 [352/740] Linking static target lib/librte_eal.a 00:02:54.370 [353/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:54.370 [354/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.370 [355/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:54.370 [356/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.370 [357/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.370 [358/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:54.370 [359/740] Linking static target lib/librte_reorder.a 00:02:54.370 [360/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:54.370 [361/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:54.370 [362/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:54.370 [363/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.370 [364/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:54.370 [365/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:54.370 [366/740] Linking static target lib/librte_power.a 00:02:54.370 [367/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.370 [368/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:54.370 [369/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:54.370 [370/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:54.370 [371/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.370 [372/740] Linking static target lib/librte_pcapng.a 00:02:54.370 [373/740] Linking static target lib/librte_security.a 00:02:54.370 [374/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:54.370 [375/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.370 [376/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.370 [377/740] 
Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:54.370 [378/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:54.633 [379/740] Linking static target lib/librte_mbuf.a 00:02:54.633 [380/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:54.633 [381/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:54.633 [382/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:54.633 [383/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:54.633 [384/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:54.633 [385/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:54.633 [386/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:54.633 [387/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.633 [388/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.633 [389/740] Linking static target lib/librte_bpf.a 00:02:54.633 [390/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:54.633 [391/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:54.633 [392/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:54.633 [393/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.633 [394/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:54.633 [395/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:54.633 [396/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:54.633 [397/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:54.633 [398/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.633 [399/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:54.633 [400/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.633 [401/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.633 [402/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:54.633 [403/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:54.633 [404/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.633 [405/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:54.633 [406/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:54.633 [407/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:54.633 [408/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:54.633 [409/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:54.895 [410/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:54.895 [411/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:54.895 [412/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.895 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:54.895 [414/740] Linking static target lib/librte_lpm.a 00:02:54.895 [415/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:54.895 [416/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:54.895 [417/740] Compiling C object 
lib/librte_graph.a.p/graph_graph.c.o 00:02:54.895 [418/740] Linking static target lib/librte_rib.a 00:02:54.895 [419/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:54.895 [420/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:54.895 [421/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:54.895 [422/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:54.895 [423/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:54.895 [424/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.895 [425/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:54.895 [426/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:54.895 [427/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:54.895 [428/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.895 [429/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.895 [430/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:54.895 [431/740] Linking static target lib/librte_graph.a 00:02:54.895 [432/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:54.895 [433/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:54.895 [434/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:54.895 [435/740] Linking static target lib/librte_efd.a 00:02:54.895 [436/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.895 [437/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:54.895 [438/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:54.895 [439/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:54.895 [440/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:54.895 [441/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.895 [442/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:54.895 [443/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:55.163 [444/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:55.163 [445/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:55.164 [446/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:55.164 [447/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:55.164 [448/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.164 [449/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:55.164 [450/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:55.164 [451/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:55.164 [452/740] Linking static target drivers/librte_bus_vdev.a 00:02:55.164 [453/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:55.164 [454/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:55.164 [455/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.164 [456/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.164 [457/740] Compiling 
C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:55.164 [458/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.164 [459/740] Linking static target lib/librte_fib.a 00:02:55.164 [460/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.422 [461/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:55.422 [462/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.422 [463/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.422 [464/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:55.422 [465/740] Linking static target lib/librte_pdump.a 00:02:55.422 [466/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:55.422 [467/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:55.422 [468/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:55.422 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.422 [470/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.422 [471/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.422 [472/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:55.422 [473/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:55.422 [474/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:55.422 [475/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:55.422 [476/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.422 [477/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:55.422 [478/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.422 [479/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:55.422 [480/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:55.422 [481/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:55.682 [482/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.682 [483/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.682 [484/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.682 [485/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.682 [486/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.682 [487/740] Linking static target drivers/librte_bus_pci.a 00:02:55.682 [488/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:55.682 [489/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:55.682 [490/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:55.682 [491/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:55.682 [492/740] Linking static target lib/librte_table.a 00:02:55.682 [493/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:55.682 [494/740] Compiling C 
object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:55.682 [495/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:55.682 [496/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:55.682 [497/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:55.682 [498/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:55.682 [499/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:55.682 [500/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:55.682 [501/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:55.682 [502/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:55.941 [503/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:55.941 [504/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.941 [505/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:55.941 [506/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:55.941 [507/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:55.941 [508/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.941 [509/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:55.941 [510/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:55.941 [511/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:55.941 [512/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.941 [513/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:55.941 [514/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:55.941 [515/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:55.941 [516/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:55.941 [517/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:55.941 [518/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:55.941 [519/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:55.941 [520/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:55.941 [521/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:55.941 [522/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:55.941 [523/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:55.941 [524/740] Linking static target lib/librte_cryptodev.a 00:02:55.941 [525/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.941 [526/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:55.941 [527/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:55.941 [528/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:55.941 [529/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:55.941 [530/740] Linking static target lib/librte_sched.a 00:02:55.941 
[531/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:55.941 [532/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:56.201 [533/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:56.201 [534/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:56.201 [535/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:56.201 [536/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:56.201 [537/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.201 [538/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:56.201 [539/740] Linking static target lib/librte_ipsec.a 00:02:56.201 [540/740] Linking static target lib/librte_node.a 00:02:56.201 [541/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:56.201 [542/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:56.201 [543/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:56.201 [544/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:56.201 [545/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:56.201 [546/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:56.201 [547/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.201 [548/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.201 [549/740] Linking static target lib/librte_ethdev.a 00:02:56.201 [550/740] Linking static target drivers/librte_mempool_ring.a 00:02:56.202 [551/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:56.202 [552/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:56.202 [553/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:56.202 [554/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:56.202 [555/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:56.202 [556/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.202 [557/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:56.202 [558/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:56.202 [559/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:56.202 [560/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:56.461 [561/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:56.461 [562/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:56.461 [563/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:56.461 [564/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:56.461 [565/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.461 [566/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:56.461 [567/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:56.461 [568/740] Linking static target lib/librte_member.a 00:02:56.461 [569/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:56.461 [570/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:56.461 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:56.461 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:56.461 [573/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:56.461 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:56.461 [575/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:56.461 [576/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:56.461 [577/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:56.461 [578/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:56.461 [579/740] Linking static target lib/librte_eventdev.a 00:02:56.462 [580/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:56.462 [581/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:56.462 [582/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:56.462 [583/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:56.462 [584/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.462 [585/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:56.462 [586/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:56.462 [587/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:56.462 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:56.462 [589/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:56.462 [590/740] Linking static target lib/librte_port.a 00:02:56.721 [591/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:56.721 [592/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.721 [593/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.721 [594/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:56.721 [595/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:56.721 [596/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:56.721 [597/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:56.721 [598/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:56.721 [599/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:56.721 [600/740] Linking static target lib/librte_hash.a 00:02:56.721 [601/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:56.721 [602/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:56.980 [603/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:56.980 [604/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.980 [605/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:56.980 [606/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:56.980 [607/740] Linking static target 
drivers/net/i40e/base/libi40e_base.a 00:02:56.980 [608/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:56.980 [609/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:56.980 [610/740] Linking static target lib/librte_acl.a 00:02:57.238 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:57.238 [612/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:57.497 [613/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:57.497 [614/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.497 [615/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:57.497 [616/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.755 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:58.013 [618/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.013 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:58.326 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:58.326 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:58.916 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:58.916 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:59.483 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:59.483 [625/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:59.483 [626/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:59.483 [627/740] Linking static target drivers/librte_net_i40e.a 00:02:59.483 [628/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.741 [629/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.741 [630/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.741 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:59.741 [632/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:00.677 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.936 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.936 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:05.936 [636/740] Linking static target lib/librte_vhost.a 00:03:06.503 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.503 [638/740] Linking static target lib/librte_pipeline.a 00:03:07.070 [639/740] Linking target app/dpdk-test-gpudev 00:03:07.070 [640/740] Linking target app/dpdk-dumpcap 00:03:07.070 [641/740] Linking target app/dpdk-test-security-perf 00:03:07.070 [642/740] Linking target app/dpdk-test-cmdline 00:03:07.070 [643/740] Linking target app/dpdk-test-acl 00:03:07.070 [644/740] Linking target app/dpdk-pdump 00:03:07.070 [645/740] Linking target app/dpdk-proc-info 00:03:07.070 [646/740] Linking target app/dpdk-test-sad 00:03:07.070 [647/740] Linking target app/dpdk-test-pipeline 00:03:07.070 [648/740] Linking target app/dpdk-test-regex 00:03:07.070 [649/740] Linking target app/dpdk-test-compress-perf 
00:03:07.070 [650/740] Linking target app/dpdk-test-fib 00:03:07.070 [651/740] Linking target app/dpdk-test-flow-perf 00:03:07.070 [652/740] Linking target app/dpdk-test-bbdev 00:03:07.070 [653/740] Linking target app/dpdk-test-crypto-perf 00:03:07.070 [654/740] Linking target app/dpdk-test-eventdev 00:03:07.070 [655/740] Linking target app/dpdk-testpmd 00:03:08.006 [656/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.574 [657/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.574 [658/740] Linking target lib/librte_eal.so.23.0 00:03:08.833 [659/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:08.833 [660/740] Linking target lib/librte_stack.so.23.0 00:03:08.833 [661/740] Linking target lib/librte_ring.so.23.0 00:03:08.833 [662/740] Linking target lib/librte_jobstats.so.23.0 00:03:08.833 [663/740] Linking target lib/librte_meter.so.23.0 00:03:08.833 [664/740] Linking target lib/librte_pci.so.23.0 00:03:08.833 [665/740] Linking target lib/librte_timer.so.23.0 00:03:08.833 [666/740] Linking target lib/librte_rawdev.so.23.0 00:03:08.833 [667/740] Linking target lib/librte_cfgfile.so.23.0 00:03:08.833 [668/740] Linking target lib/librte_dmadev.so.23.0 00:03:08.833 [669/740] Linking target lib/librte_graph.so.23.0 00:03:08.833 [670/740] Linking target drivers/librte_bus_vdev.so.23.0 00:03:08.833 [671/740] Linking target lib/librte_acl.so.23.0 00:03:08.833 [672/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:08.833 [673/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:08.833 [674/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:08.833 [675/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:08.833 [676/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:08.833 [677/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:08.833 [678/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:08.833 [679/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:09.092 [680/740] Linking target lib/librte_rcu.so.23.0 00:03:09.092 [681/740] Linking target lib/librte_mempool.so.23.0 00:03:09.092 [682/740] Linking target drivers/librte_bus_pci.so.23.0 00:03:09.092 [683/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:09.092 [684/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:09.092 [685/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:09.092 [686/740] Linking target drivers/librte_mempool_ring.so.23.0 00:03:09.092 [687/740] Linking target lib/librte_rib.so.23.0 00:03:09.092 [688/740] Linking target lib/librte_mbuf.so.23.0 00:03:09.350 [689/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:09.350 [690/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:09.350 [691/740] Linking target lib/librte_fib.so.23.0 00:03:09.350 [692/740] Linking target lib/librte_compressdev.so.23.0 00:03:09.350 [693/740] Linking target lib/librte_regexdev.so.23.0 00:03:09.350 [694/740] Linking target lib/librte_net.so.23.0 00:03:09.350 [695/740] Linking target lib/librte_bbdev.so.23.0 00:03:09.350 
[696/740] Linking target lib/librte_reorder.so.23.0 00:03:09.350 [697/740] Linking target lib/librte_distributor.so.23.0 00:03:09.350 [698/740] Linking target lib/librte_gpudev.so.23.0 00:03:09.350 [699/740] Linking target lib/librte_sched.so.23.0 00:03:09.350 [700/740] Linking target lib/librte_cryptodev.so.23.0 00:03:09.608 [701/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:09.608 [702/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:09.608 [703/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:09.608 [704/740] Linking target lib/librte_hash.so.23.0 00:03:09.608 [705/740] Linking target lib/librte_cmdline.so.23.0 00:03:09.608 [706/740] Linking target lib/librte_security.so.23.0 00:03:09.608 [707/740] Linking target lib/librte_ethdev.so.23.0 00:03:09.608 [708/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:09.608 [709/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:09.608 [710/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:09.608 [711/740] Linking target lib/librte_member.so.23.0 00:03:09.866 [712/740] Linking target lib/librte_efd.so.23.0 00:03:09.866 [713/740] Linking target lib/librte_lpm.so.23.0 00:03:09.866 [714/740] Linking target lib/librte_ipsec.so.23.0 00:03:09.866 [715/740] Linking target lib/librte_pcapng.so.23.0 00:03:09.866 [716/740] Linking target lib/librte_gro.so.23.0 00:03:09.866 [717/740] Linking target lib/librte_metrics.so.23.0 00:03:09.866 [718/740] Linking target lib/librte_ip_frag.so.23.0 00:03:09.866 [719/740] Linking target lib/librte_gso.so.23.0 00:03:09.866 [720/740] Linking target lib/librte_bpf.so.23.0 00:03:09.866 [721/740] Linking target lib/librte_eventdev.so.23.0 00:03:09.866 [722/740] Linking target lib/librte_power.so.23.0 00:03:09.866 [723/740] Linking target lib/librte_vhost.so.23.0 00:03:09.866 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:03:09.866 [725/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:09.866 [726/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:09.866 [727/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:09.866 [728/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:09.866 [729/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:09.866 [730/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:09.866 [731/740] Linking target lib/librte_node.so.23.0 00:03:09.866 [732/740] Linking target lib/librte_pdump.so.23.0 00:03:09.866 [733/740] Linking target lib/librte_bitratestats.so.23.0 00:03:09.866 [734/740] Linking target lib/librte_latencystats.so.23.0 00:03:09.866 [735/740] Linking target lib/librte_port.so.23.0 00:03:10.124 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:10.124 [737/740] Linking target lib/librte_table.so.23.0 00:03:10.381 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:11.756 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.756 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:11.756 15:21:49 build_native_dpdk -- common/autobuild_common.sh@194 -- $ 
uname -s 00:03:11.756 15:21:49 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:11.756 15:21:49 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install 00:03:11.756 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:03:11.756 [0/1] Installing files. 00:03:11.756 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.756 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 
00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.757 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:11.758 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.758 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.759 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.019 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:12.021 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:12.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:12.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:12.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:12.022 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
00:03:12.022 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_bitratestats.a to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.022 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.283 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 
Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:12.284 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:12.284 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:12.284 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.284 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:12.284 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.284 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.285 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.286 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:03:12.287 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:03:12.287 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:12.287 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:12.287 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:12.287 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:12.287 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:12.287 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:12.287 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:12.287 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:12.287 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:12.288 
Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:12.288 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:12.288 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:12.288 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:12.288 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:12.288 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:12.288 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:03:12.288 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:12.288 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:12.288 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:12.288 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:12.288 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:12.288 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:12.288 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:12.288 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:12.288 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:12.288 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:12.288 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:12.288 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:12.288 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:12.288 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:12.288 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:12.288 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:12.288 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:12.288 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:12.288 Installing symlink pointing to 
librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:12.288 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:12.288 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:12.288 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:12.288 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:12.288 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:12.288 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:12.288 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:12.288 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:12.288 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:12.288 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:12.288 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:12.288 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:12.288 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:12.288 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:12.288 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:12.288 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:12.288 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:12.288 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:12.288 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:12.288 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:12.288 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:12.288 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:12.288 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:12.288 Installing symlink pointing to librte_jobstats.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:12.288 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:12.288 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:12.288 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:12.288 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:12.288 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:12.288 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:12.288 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:03:12.288 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:12.288 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:12.288 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:12.288 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:03:12.288 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:12.288 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:12.288 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:12.288 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:12.288 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:12.288 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:12.288 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:12.288 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:12.288 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:12.288 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:12.288 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:12.288 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:12.288 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:12.288 Installing symlink pointing to 
librte_security.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:03:12.288 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:12.288 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:12.288 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:12.288 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:12.288 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:12.288 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:12.289 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:12.289 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:12.289 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:12.289 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:03:12.289 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:12.289 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:12.289 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:12.289 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:12.289 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:12.289 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:12.289 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:12.289 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:12.289 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:12.289 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:12.289 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:12.289 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:12.289 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:12.289 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:12.289 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:12.289 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:03:12.289 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:12.289 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:12.289 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:12.289 Installing symlink pointing to librte_graph.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:12.289 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:12.289 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:03:12.289 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:12.289 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:12.289 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:12.289 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:12.289 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:12.289 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:12.289 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:12.289 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:12.289 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:12.289 15:21:49 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:12.289 15:21:49 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:12.289 00:03:12.289 real 0m25.977s 00:03:12.289 user 6m39.100s 00:03:12.289 sys 2m15.444s 00:03:12.289 15:21:49 build_native_dpdk -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:12.289 15:21:49 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:12.289 ************************************ 00:03:12.289 END TEST build_native_dpdk 00:03:12.289 ************************************ 00:03:12.289 15:21:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:12.289 15:21:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:12.289 15:21:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:12.289 15:21:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:12.289 15:21:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:12.289 15:21:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:12.289 15:21:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:12.289 15:21:49 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:03:12.548 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:03:12.548 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:03:12.548 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:03:12.548 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:03:13.116 Using 'verbs' RDMA provider 00:03:26.248 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:41.135 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:41.135 Creating mk/config.mk...done. 00:03:41.135 Creating mk/cc.flags.mk...done. 00:03:41.135 Type 'make' to build. 00:03:41.135 15:22:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:03:41.135 15:22:17 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:41.135 15:22:17 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:41.135 15:22:17 -- common/autotest_common.sh@10 -- $ set +x 00:03:41.135 ************************************ 00:03:41.135 START TEST make 00:03:41.135 ************************************ 00:03:41.135 15:22:17 make -- common/autotest_common.sh@1127 -- $ make -j112 00:03:41.135 make[1]: Nothing to be done for 'all'. 00:04:13.228 CC lib/ut_mock/mock.o 00:04:13.228 CC lib/log/log.o 00:04:13.228 CC lib/log/log_flags.o 00:04:13.228 CC lib/log/log_deprecated.o 00:04:13.228 CC lib/ut/ut.o 00:04:13.228 LIB libspdk_ut_mock.a 00:04:13.228 LIB libspdk_ut.a 00:04:13.228 LIB libspdk_log.a 00:04:13.228 SO libspdk_ut_mock.so.6.0 00:04:13.228 SO libspdk_ut.so.2.0 00:04:13.228 SO libspdk_log.so.7.1 00:04:13.228 SYMLINK libspdk_ut_mock.so 00:04:13.228 SYMLINK libspdk_ut.so 00:04:13.228 SYMLINK libspdk_log.so 00:04:13.228 CXX lib/trace_parser/trace.o 00:04:13.228 CC lib/ioat/ioat.o 00:04:13.228 CC lib/dma/dma.o 00:04:13.228 CC lib/util/base64.o 00:04:13.228 CC lib/util/bit_array.o 00:04:13.228 CC lib/util/cpuset.o 00:04:13.228 CC lib/util/crc16.o 00:04:13.228 CC lib/util/crc32.o 00:04:13.228 CC lib/util/crc32c.o 00:04:13.228 CC lib/util/crc32_ieee.o 00:04:13.228 CC lib/util/crc64.o 00:04:13.228 CC lib/util/dif.o 00:04:13.228 CC lib/util/fd.o 00:04:13.228 CC lib/util/fd_group.o 00:04:13.228 CC lib/util/file.o 00:04:13.228 CC lib/util/hexlify.o 00:04:13.228 CC lib/util/pipe.o 00:04:13.228 CC lib/util/iov.o 00:04:13.228 CC lib/util/math.o 00:04:13.228 CC lib/util/net.o 00:04:13.228 CC lib/util/strerror_tls.o 00:04:13.228 CC lib/util/string.o 00:04:13.228 CC lib/util/uuid.o 00:04:13.228 CC lib/util/xor.o 00:04:13.228 CC lib/util/zipf.o 00:04:13.228 CC lib/util/md5.o 00:04:13.228 CC lib/vfio_user/host/vfio_user_pci.o 00:04:13.228 CC lib/vfio_user/host/vfio_user.o 00:04:13.228 LIB libspdk_dma.a 00:04:13.228 SO libspdk_dma.so.5.0 00:04:13.228 LIB libspdk_ioat.a 00:04:13.228 SYMLINK libspdk_dma.so 00:04:13.228 SO libspdk_ioat.so.7.0 00:04:13.228 LIB libspdk_vfio_user.a 00:04:13.228 SYMLINK libspdk_ioat.so 00:04:13.228 SO libspdk_vfio_user.so.5.0 00:04:13.228 LIB libspdk_util.a 00:04:13.228 SYMLINK libspdk_vfio_user.so 00:04:13.228 SO libspdk_util.so.10.0 00:04:13.228 SYMLINK libspdk_util.so 00:04:13.228 LIB libspdk_trace_parser.a 00:04:13.228 SO libspdk_trace_parser.so.6.0 00:04:13.228 SYMLINK libspdk_trace_parser.so 00:04:13.228 CC lib/env_dpdk/env.o 00:04:13.228 CC lib/env_dpdk/memory.o 00:04:13.228 CC lib/conf/conf.o 00:04:13.228 CC lib/env_dpdk/pci.o 00:04:13.228 CC lib/env_dpdk/pci_ioat.o 00:04:13.228 CC lib/env_dpdk/init.o 00:04:13.228 CC lib/env_dpdk/threads.o 00:04:13.228 CC lib/env_dpdk/pci_virtio.o 00:04:13.228 CC 
lib/env_dpdk/pci_vmd.o 00:04:13.228 CC lib/env_dpdk/pci_idxd.o 00:04:13.228 CC lib/env_dpdk/pci_event.o 00:04:13.228 CC lib/env_dpdk/sigbus_handler.o 00:04:13.228 CC lib/rdma_utils/rdma_utils.o 00:04:13.228 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:13.228 CC lib/env_dpdk/pci_dpdk.o 00:04:13.228 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:13.228 CC lib/vmd/vmd.o 00:04:13.228 CC lib/vmd/led.o 00:04:13.228 CC lib/idxd/idxd.o 00:04:13.228 CC lib/idxd/idxd_user.o 00:04:13.228 CC lib/idxd/idxd_kernel.o 00:04:13.228 CC lib/json/json_parse.o 00:04:13.228 CC lib/rdma_provider/common.o 00:04:13.228 CC lib/json/json_util.o 00:04:13.228 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:13.228 CC lib/json/json_write.o 00:04:13.228 LIB libspdk_rdma_provider.a 00:04:13.228 LIB libspdk_conf.a 00:04:13.228 SO libspdk_rdma_provider.so.6.0 00:04:13.228 SO libspdk_conf.so.6.0 00:04:13.228 LIB libspdk_rdma_utils.a 00:04:13.228 LIB libspdk_json.a 00:04:13.228 SYMLINK libspdk_conf.so 00:04:13.228 SYMLINK libspdk_rdma_provider.so 00:04:13.228 SO libspdk_rdma_utils.so.1.0 00:04:13.228 SO libspdk_json.so.6.0 00:04:13.228 SYMLINK libspdk_rdma_utils.so 00:04:13.228 SYMLINK libspdk_json.so 00:04:13.228 LIB libspdk_idxd.a 00:04:13.228 LIB libspdk_vmd.a 00:04:13.228 SO libspdk_idxd.so.12.1 00:04:13.228 SO libspdk_vmd.so.6.0 00:04:13.228 SYMLINK libspdk_idxd.so 00:04:13.228 SYMLINK libspdk_vmd.so 00:04:13.228 CC lib/jsonrpc/jsonrpc_server.o 00:04:13.228 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:13.228 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:13.228 CC lib/jsonrpc/jsonrpc_client.o 00:04:13.228 LIB libspdk_jsonrpc.a 00:04:13.228 SO libspdk_jsonrpc.so.6.0 00:04:13.228 LIB libspdk_env_dpdk.a 00:04:13.228 SYMLINK libspdk_jsonrpc.so 00:04:13.228 SO libspdk_env_dpdk.so.15.1 00:04:13.228 SYMLINK libspdk_env_dpdk.so 00:04:13.228 CC lib/rpc/rpc.o 00:04:13.228 LIB libspdk_rpc.a 00:04:13.228 SO libspdk_rpc.so.6.0 00:04:13.228 SYMLINK libspdk_rpc.so 00:04:13.228 CC lib/keyring/keyring_rpc.o 00:04:13.228 CC lib/keyring/keyring.o 00:04:13.228 CC lib/trace/trace.o 00:04:13.228 CC lib/trace/trace_flags.o 00:04:13.228 CC lib/trace/trace_rpc.o 00:04:13.228 CC lib/notify/notify_rpc.o 00:04:13.228 CC lib/notify/notify.o 00:04:13.228 LIB libspdk_notify.a 00:04:13.228 LIB libspdk_keyring.a 00:04:13.228 SO libspdk_notify.so.6.0 00:04:13.228 LIB libspdk_trace.a 00:04:13.228 SO libspdk_keyring.so.2.0 00:04:13.228 SO libspdk_trace.so.11.0 00:04:13.228 SYMLINK libspdk_notify.so 00:04:13.228 SYMLINK libspdk_keyring.so 00:04:13.228 SYMLINK libspdk_trace.so 00:04:13.487 CC lib/thread/thread.o 00:04:13.487 CC lib/thread/iobuf.o 00:04:13.487 CC lib/sock/sock_rpc.o 00:04:13.487 CC lib/sock/sock.o 00:04:14.054 LIB libspdk_sock.a 00:04:14.054 SO libspdk_sock.so.10.0 00:04:14.054 SYMLINK libspdk_sock.so 00:04:14.312 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:14.312 CC lib/nvme/nvme_ns_cmd.o 00:04:14.312 CC lib/nvme/nvme_ctrlr.o 00:04:14.312 CC lib/nvme/nvme_fabric.o 00:04:14.312 CC lib/nvme/nvme_ns.o 00:04:14.312 CC lib/nvme/nvme_qpair.o 00:04:14.312 CC lib/nvme/nvme_pcie_common.o 00:04:14.312 CC lib/nvme/nvme_pcie.o 00:04:14.312 CC lib/nvme/nvme.o 00:04:14.312 CC lib/nvme/nvme_quirks.o 00:04:14.312 CC lib/nvme/nvme_transport.o 00:04:14.312 CC lib/nvme/nvme_discovery.o 00:04:14.312 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:14.312 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:14.312 CC lib/nvme/nvme_tcp.o 00:04:14.312 CC lib/nvme/nvme_opal.o 00:04:14.312 CC lib/nvme/nvme_io_msg.o 00:04:14.312 CC lib/nvme/nvme_poll_group.o 00:04:14.312 CC lib/nvme/nvme_zns.o 00:04:14.312 CC 
lib/nvme/nvme_stubs.o 00:04:14.312 CC lib/nvme/nvme_auth.o 00:04:14.312 CC lib/nvme/nvme_cuse.o 00:04:14.312 CC lib/nvme/nvme_rdma.o 00:04:14.571 LIB libspdk_thread.a 00:04:14.571 SO libspdk_thread.so.11.0 00:04:14.831 SYMLINK libspdk_thread.so 00:04:15.089 CC lib/accel/accel_sw.o 00:04:15.089 CC lib/accel/accel_rpc.o 00:04:15.089 CC lib/accel/accel.o 00:04:15.089 CC lib/virtio/virtio_vfio_user.o 00:04:15.089 CC lib/virtio/virtio.o 00:04:15.089 CC lib/virtio/virtio_vhost_user.o 00:04:15.089 CC lib/virtio/virtio_pci.o 00:04:15.089 CC lib/fsdev/fsdev.o 00:04:15.089 CC lib/init/subsystem.o 00:04:15.089 CC lib/init/json_config.o 00:04:15.089 CC lib/fsdev/fsdev_io.o 00:04:15.089 CC lib/fsdev/fsdev_rpc.o 00:04:15.089 CC lib/init/subsystem_rpc.o 00:04:15.089 CC lib/init/rpc.o 00:04:15.089 CC lib/blob/zeroes.o 00:04:15.089 CC lib/blob/blobstore.o 00:04:15.089 CC lib/blob/request.o 00:04:15.089 CC lib/blob/blob_bs_dev.o 00:04:15.348 LIB libspdk_init.a 00:04:15.348 SO libspdk_init.so.6.0 00:04:15.348 LIB libspdk_virtio.a 00:04:15.348 SO libspdk_virtio.so.7.0 00:04:15.348 SYMLINK libspdk_init.so 00:04:15.607 SYMLINK libspdk_virtio.so 00:04:15.607 LIB libspdk_fsdev.a 00:04:15.607 SO libspdk_fsdev.so.2.0 00:04:15.607 SYMLINK libspdk_fsdev.so 00:04:15.865 CC lib/event/app.o 00:04:15.865 CC lib/event/reactor.o 00:04:15.865 CC lib/event/scheduler_static.o 00:04:15.865 CC lib/event/log_rpc.o 00:04:15.865 CC lib/event/app_rpc.o 00:04:15.865 LIB libspdk_accel.a 00:04:15.865 SO libspdk_accel.so.16.0 00:04:16.127 SYMLINK libspdk_accel.so 00:04:16.127 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:16.127 LIB libspdk_nvme.a 00:04:16.127 LIB libspdk_event.a 00:04:16.127 SO libspdk_event.so.14.0 00:04:16.127 SO libspdk_nvme.so.14.1 00:04:16.127 SYMLINK libspdk_event.so 00:04:16.447 CC lib/bdev/bdev.o 00:04:16.447 CC lib/bdev/bdev_rpc.o 00:04:16.447 CC lib/bdev/part.o 00:04:16.447 CC lib/bdev/bdev_zone.o 00:04:16.447 CC lib/bdev/scsi_nvme.o 00:04:16.447 SYMLINK libspdk_nvme.so 00:04:16.447 LIB libspdk_fuse_dispatcher.a 00:04:16.447 SO libspdk_fuse_dispatcher.so.1.0 00:04:16.727 SYMLINK libspdk_fuse_dispatcher.so 00:04:17.293 LIB libspdk_blob.a 00:04:17.293 SO libspdk_blob.so.11.0 00:04:17.293 SYMLINK libspdk_blob.so 00:04:17.551 CC lib/blobfs/blobfs.o 00:04:17.551 CC lib/blobfs/tree.o 00:04:17.809 CC lib/lvol/lvol.o 00:04:18.067 LIB libspdk_bdev.a 00:04:18.325 SO libspdk_bdev.so.17.0 00:04:18.325 LIB libspdk_blobfs.a 00:04:18.325 SO libspdk_blobfs.so.10.0 00:04:18.325 SYMLINK libspdk_bdev.so 00:04:18.325 LIB libspdk_lvol.a 00:04:18.325 SYMLINK libspdk_blobfs.so 00:04:18.325 SO libspdk_lvol.so.10.0 00:04:18.585 SYMLINK libspdk_lvol.so 00:04:18.585 CC lib/nbd/nbd.o 00:04:18.585 CC lib/nbd/nbd_rpc.o 00:04:18.585 CC lib/ftl/ftl_core.o 00:04:18.585 CC lib/ftl/ftl_init.o 00:04:18.585 CC lib/ftl/ftl_io.o 00:04:18.585 CC lib/ftl/ftl_layout.o 00:04:18.585 CC lib/ftl/ftl_debug.o 00:04:18.585 CC lib/ftl/ftl_sb.o 00:04:18.585 CC lib/ftl/ftl_l2p.o 00:04:18.585 CC lib/nvmf/subsystem.o 00:04:18.585 CC lib/scsi/lun.o 00:04:18.585 CC lib/nvmf/ctrlr.o 00:04:18.585 CC lib/ftl/ftl_l2p_flat.o 00:04:18.585 CC lib/nvmf/ctrlr_bdev.o 00:04:18.585 CC lib/nvmf/ctrlr_discovery.o 00:04:18.585 CC lib/ftl/ftl_nv_cache.o 00:04:18.585 CC lib/scsi/port.o 00:04:18.585 CC lib/scsi/dev.o 00:04:18.585 CC lib/ftl/ftl_band.o 00:04:18.585 CC lib/ftl/ftl_band_ops.o 00:04:18.585 CC lib/nvmf/nvmf.o 00:04:18.585 CC lib/ftl/ftl_writer.o 00:04:18.585 CC lib/nvmf/tcp.o 00:04:18.585 CC lib/nvmf/nvmf_rpc.o 00:04:18.585 CC lib/ftl/ftl_rq.o 00:04:18.585 CC 
lib/ftl/ftl_p2l.o 00:04:18.585 CC lib/scsi/scsi.o 00:04:18.585 CC lib/ftl/ftl_reloc.o 00:04:18.585 CC lib/ftl/ftl_l2p_cache.o 00:04:18.585 CC lib/nvmf/transport.o 00:04:18.585 CC lib/scsi/scsi_bdev.o 00:04:18.585 CC lib/scsi/scsi_pr.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt.o 00:04:18.585 CC lib/nvmf/stubs.o 00:04:18.585 CC lib/ftl/ftl_p2l_log.o 00:04:18.585 CC lib/scsi/scsi_rpc.o 00:04:18.585 CC lib/nvmf/mdns_server.o 00:04:18.585 CC lib/scsi/task.o 00:04:18.585 CC lib/nvmf/rdma.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:18.585 CC lib/nvmf/auth.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:18.585 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:18.585 CC lib/ftl/utils/ftl_md.o 00:04:18.585 CC lib/ftl/utils/ftl_conf.o 00:04:18.585 CC lib/ftl/utils/ftl_mempool.o 00:04:18.585 CC lib/ublk/ublk.o 00:04:18.585 CC lib/ftl/utils/ftl_bitmap.o 00:04:18.585 CC lib/ftl/utils/ftl_property.o 00:04:18.585 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:18.585 CC lib/ublk/ublk_rpc.o 00:04:18.585 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:18.585 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:18.585 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:18.585 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:18.585 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:18.585 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:18.585 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:18.585 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:18.585 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:18.585 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:18.585 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:18.844 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:18.844 CC lib/ftl/base/ftl_base_dev.o 00:04:18.844 CC lib/ftl/ftl_trace.o 00:04:18.844 CC lib/ftl/base/ftl_base_bdev.o 00:04:19.103 LIB libspdk_nbd.a 00:04:19.103 SO libspdk_nbd.so.7.0 00:04:19.362 SYMLINK libspdk_nbd.so 00:04:19.362 LIB libspdk_scsi.a 00:04:19.362 LIB libspdk_ublk.a 00:04:19.362 SO libspdk_scsi.so.9.0 00:04:19.362 SO libspdk_ublk.so.3.0 00:04:19.362 SYMLINK libspdk_ublk.so 00:04:19.362 SYMLINK libspdk_scsi.so 00:04:19.621 LIB libspdk_ftl.a 00:04:19.621 SO libspdk_ftl.so.9.0 00:04:19.880 CC lib/vhost/vhost.o 00:04:19.880 CC lib/vhost/vhost_rpc.o 00:04:19.880 CC lib/vhost/vhost_scsi.o 00:04:19.880 CC lib/vhost/vhost_blk.o 00:04:19.880 CC lib/vhost/rte_vhost_user.o 00:04:19.880 CC lib/iscsi/iscsi.o 00:04:19.880 CC lib/iscsi/conn.o 00:04:19.880 CC lib/iscsi/init_grp.o 00:04:19.880 CC lib/iscsi/tgt_node.o 00:04:19.880 CC lib/iscsi/param.o 00:04:19.880 CC lib/iscsi/iscsi_subsystem.o 00:04:19.880 CC lib/iscsi/portal_grp.o 00:04:19.880 CC lib/iscsi/iscsi_rpc.o 00:04:19.880 CC lib/iscsi/task.o 00:04:19.880 SYMLINK libspdk_ftl.so 00:04:20.446 LIB libspdk_nvmf.a 00:04:20.446 SO libspdk_nvmf.so.20.0 00:04:20.446 LIB libspdk_vhost.a 00:04:20.705 SO libspdk_vhost.so.8.0 00:04:20.705 SYMLINK libspdk_nvmf.so 00:04:20.705 SYMLINK libspdk_vhost.so 00:04:20.705 LIB libspdk_iscsi.a 00:04:20.963 SO libspdk_iscsi.so.8.0 00:04:20.963 SYMLINK libspdk_iscsi.so 00:04:21.529 CC module/env_dpdk/env_dpdk_rpc.o 00:04:21.787 CC module/scheduler/gscheduler/gscheduler.o 00:04:21.787 CC module/accel/ioat/accel_ioat.o 00:04:21.787 CC 
module/accel/dsa/accel_dsa_rpc.o 00:04:21.787 CC module/accel/ioat/accel_ioat_rpc.o 00:04:21.787 CC module/accel/dsa/accel_dsa.o 00:04:21.787 CC module/keyring/file/keyring.o 00:04:21.787 CC module/keyring/file/keyring_rpc.o 00:04:21.787 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:21.787 CC module/accel/error/accel_error.o 00:04:21.787 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:21.787 CC module/accel/error/accel_error_rpc.o 00:04:21.787 CC module/blob/bdev/blob_bdev.o 00:04:21.787 LIB libspdk_env_dpdk_rpc.a 00:04:21.787 CC module/accel/iaa/accel_iaa.o 00:04:21.787 CC module/fsdev/aio/fsdev_aio.o 00:04:21.787 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:21.787 CC module/accel/iaa/accel_iaa_rpc.o 00:04:21.787 CC module/sock/posix/posix.o 00:04:21.787 CC module/fsdev/aio/linux_aio_mgr.o 00:04:21.787 CC module/keyring/linux/keyring.o 00:04:21.787 CC module/keyring/linux/keyring_rpc.o 00:04:21.787 SO libspdk_env_dpdk_rpc.so.6.0 00:04:21.787 SYMLINK libspdk_env_dpdk_rpc.so 00:04:21.787 LIB libspdk_scheduler_gscheduler.a 00:04:21.787 LIB libspdk_keyring_file.a 00:04:21.787 LIB libspdk_keyring_linux.a 00:04:21.787 LIB libspdk_scheduler_dpdk_governor.a 00:04:21.787 LIB libspdk_accel_ioat.a 00:04:21.787 SO libspdk_scheduler_gscheduler.so.4.0 00:04:21.787 LIB libspdk_accel_error.a 00:04:21.787 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:21.787 LIB libspdk_scheduler_dynamic.a 00:04:21.787 SO libspdk_accel_ioat.so.6.0 00:04:21.787 SO libspdk_keyring_file.so.2.0 00:04:21.787 SO libspdk_keyring_linux.so.1.0 00:04:22.052 LIB libspdk_accel_iaa.a 00:04:22.052 SO libspdk_accel_error.so.2.0 00:04:22.052 SO libspdk_scheduler_dynamic.so.4.0 00:04:22.052 SYMLINK libspdk_scheduler_gscheduler.so 00:04:22.052 SO libspdk_accel_iaa.so.3.0 00:04:22.052 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:22.052 SYMLINK libspdk_keyring_linux.so 00:04:22.052 LIB libspdk_blob_bdev.a 00:04:22.052 LIB libspdk_accel_dsa.a 00:04:22.052 SYMLINK libspdk_keyring_file.so 00:04:22.052 SYMLINK libspdk_accel_ioat.so 00:04:22.052 SYMLINK libspdk_accel_error.so 00:04:22.052 SYMLINK libspdk_scheduler_dynamic.so 00:04:22.052 SO libspdk_blob_bdev.so.11.0 00:04:22.052 SO libspdk_accel_dsa.so.5.0 00:04:22.052 SYMLINK libspdk_accel_iaa.so 00:04:22.052 SYMLINK libspdk_blob_bdev.so 00:04:22.052 SYMLINK libspdk_accel_dsa.so 00:04:22.310 LIB libspdk_fsdev_aio.a 00:04:22.310 LIB libspdk_sock_posix.a 00:04:22.310 SO libspdk_fsdev_aio.so.1.0 00:04:22.310 SO libspdk_sock_posix.so.6.0 00:04:22.310 SYMLINK libspdk_fsdev_aio.so 00:04:22.310 SYMLINK libspdk_sock_posix.so 00:04:22.569 CC module/bdev/gpt/gpt.o 00:04:22.569 CC module/bdev/gpt/vbdev_gpt.o 00:04:22.569 CC module/bdev/malloc/bdev_malloc.o 00:04:22.569 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:22.569 CC module/bdev/split/vbdev_split.o 00:04:22.569 CC module/bdev/null/bdev_null_rpc.o 00:04:22.569 CC module/bdev/split/vbdev_split_rpc.o 00:04:22.569 CC module/bdev/null/bdev_null.o 00:04:22.569 CC module/bdev/raid/bdev_raid.o 00:04:22.569 CC module/bdev/lvol/vbdev_lvol.o 00:04:22.569 CC module/bdev/raid/bdev_raid_rpc.o 00:04:22.569 CC module/bdev/raid/bdev_raid_sb.o 00:04:22.569 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:22.569 CC module/bdev/raid/raid0.o 00:04:22.569 CC module/bdev/raid/concat.o 00:04:22.569 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:22.569 CC module/bdev/delay/vbdev_delay.o 00:04:22.569 CC module/bdev/raid/raid1.o 00:04:22.569 CC module/bdev/passthru/vbdev_passthru.o 00:04:22.569 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:22.569 CC 
module/bdev/iscsi/bdev_iscsi.o 00:04:22.569 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:22.569 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:22.569 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:22.569 CC module/bdev/nvme/bdev_nvme.o 00:04:22.569 CC module/bdev/error/vbdev_error_rpc.o 00:04:22.569 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:22.569 CC module/bdev/nvme/bdev_mdns_client.o 00:04:22.569 CC module/bdev/error/vbdev_error.o 00:04:22.569 CC module/bdev/nvme/nvme_rpc.o 00:04:22.569 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:22.569 CC module/bdev/nvme/vbdev_opal.o 00:04:22.569 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:22.569 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:22.569 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:22.569 CC module/bdev/ftl/bdev_ftl.o 00:04:22.569 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:22.569 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:22.569 CC module/blobfs/bdev/blobfs_bdev.o 00:04:22.569 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:22.569 CC module/bdev/aio/bdev_aio.o 00:04:22.569 CC module/bdev/aio/bdev_aio_rpc.o 00:04:22.828 LIB libspdk_bdev_split.a 00:04:22.828 LIB libspdk_blobfs_bdev.a 00:04:22.828 LIB libspdk_bdev_gpt.a 00:04:22.828 SO libspdk_bdev_split.so.6.0 00:04:22.828 LIB libspdk_bdev_null.a 00:04:22.828 SO libspdk_bdev_gpt.so.6.0 00:04:22.828 SO libspdk_blobfs_bdev.so.6.0 00:04:22.828 LIB libspdk_bdev_passthru.a 00:04:22.828 LIB libspdk_bdev_error.a 00:04:22.828 SO libspdk_bdev_null.so.6.0 00:04:22.828 LIB libspdk_bdev_ftl.a 00:04:22.828 LIB libspdk_bdev_malloc.a 00:04:22.828 SO libspdk_bdev_passthru.so.6.0 00:04:22.828 SYMLINK libspdk_bdev_split.so 00:04:22.828 LIB libspdk_bdev_zone_block.a 00:04:22.828 SO libspdk_bdev_error.so.6.0 00:04:22.828 SYMLINK libspdk_blobfs_bdev.so 00:04:22.828 LIB libspdk_bdev_iscsi.a 00:04:22.828 SYMLINK libspdk_bdev_gpt.so 00:04:22.828 SO libspdk_bdev_malloc.so.6.0 00:04:23.087 SO libspdk_bdev_ftl.so.6.0 00:04:23.087 LIB libspdk_bdev_aio.a 00:04:23.087 LIB libspdk_bdev_delay.a 00:04:23.087 SO libspdk_bdev_iscsi.so.6.0 00:04:23.087 SYMLINK libspdk_bdev_null.so 00:04:23.087 SYMLINK libspdk_bdev_passthru.so 00:04:23.087 SO libspdk_bdev_zone_block.so.6.0 00:04:23.087 SO libspdk_bdev_aio.so.6.0 00:04:23.087 SYMLINK libspdk_bdev_error.so 00:04:23.087 SO libspdk_bdev_delay.so.6.0 00:04:23.087 SYMLINK libspdk_bdev_ftl.so 00:04:23.087 SYMLINK libspdk_bdev_malloc.so 00:04:23.087 SYMLINK libspdk_bdev_iscsi.so 00:04:23.087 SYMLINK libspdk_bdev_zone_block.so 00:04:23.087 LIB libspdk_bdev_lvol.a 00:04:23.087 SYMLINK libspdk_bdev_aio.so 00:04:23.087 LIB libspdk_bdev_virtio.a 00:04:23.087 SYMLINK libspdk_bdev_delay.so 00:04:23.087 SO libspdk_bdev_lvol.so.6.0 00:04:23.087 SO libspdk_bdev_virtio.so.6.0 00:04:23.087 SYMLINK libspdk_bdev_lvol.so 00:04:23.087 SYMLINK libspdk_bdev_virtio.so 00:04:23.346 LIB libspdk_bdev_raid.a 00:04:23.346 SO libspdk_bdev_raid.so.6.0 00:04:23.604 SYMLINK libspdk_bdev_raid.so 00:04:24.540 LIB libspdk_bdev_nvme.a 00:04:24.540 SO libspdk_bdev_nvme.so.7.1 00:04:24.540 SYMLINK libspdk_bdev_nvme.so 00:04:25.477 CC module/event/subsystems/sock/sock.o 00:04:25.477 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:25.477 CC module/event/subsystems/iobuf/iobuf.o 00:04:25.477 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:25.477 CC module/event/subsystems/vmd/vmd.o 00:04:25.477 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:25.477 CC module/event/subsystems/scheduler/scheduler.o 00:04:25.477 CC module/event/subsystems/fsdev/fsdev.o 00:04:25.477 CC 
module/event/subsystems/keyring/keyring.o 00:04:25.477 LIB libspdk_event_sock.a 00:04:25.477 LIB libspdk_event_vhost_blk.a 00:04:25.477 LIB libspdk_event_keyring.a 00:04:25.477 LIB libspdk_event_iobuf.a 00:04:25.477 LIB libspdk_event_fsdev.a 00:04:25.477 LIB libspdk_event_vmd.a 00:04:25.477 LIB libspdk_event_scheduler.a 00:04:25.477 SO libspdk_event_vhost_blk.so.3.0 00:04:25.477 SO libspdk_event_sock.so.5.0 00:04:25.477 SO libspdk_event_keyring.so.1.0 00:04:25.477 SO libspdk_event_iobuf.so.3.0 00:04:25.477 SO libspdk_event_fsdev.so.1.0 00:04:25.477 SO libspdk_event_scheduler.so.4.0 00:04:25.477 SO libspdk_event_vmd.so.6.0 00:04:25.477 SYMLINK libspdk_event_vhost_blk.so 00:04:25.477 SYMLINK libspdk_event_sock.so 00:04:25.477 SYMLINK libspdk_event_keyring.so 00:04:25.477 SYMLINK libspdk_event_iobuf.so 00:04:25.477 SYMLINK libspdk_event_fsdev.so 00:04:25.477 SYMLINK libspdk_event_scheduler.so 00:04:25.477 SYMLINK libspdk_event_vmd.so 00:04:26.043 CC module/event/subsystems/accel/accel.o 00:04:26.043 LIB libspdk_event_accel.a 00:04:26.043 SO libspdk_event_accel.so.6.0 00:04:26.043 SYMLINK libspdk_event_accel.so 00:04:26.611 CC module/event/subsystems/bdev/bdev.o 00:04:26.611 LIB libspdk_event_bdev.a 00:04:26.611 SO libspdk_event_bdev.so.6.0 00:04:26.869 SYMLINK libspdk_event_bdev.so 00:04:27.127 CC module/event/subsystems/nbd/nbd.o 00:04:27.127 CC module/event/subsystems/scsi/scsi.o 00:04:27.127 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:27.127 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:27.127 CC module/event/subsystems/ublk/ublk.o 00:04:27.386 LIB libspdk_event_nbd.a 00:04:27.386 LIB libspdk_event_ublk.a 00:04:27.386 LIB libspdk_event_scsi.a 00:04:27.386 SO libspdk_event_nbd.so.6.0 00:04:27.386 SO libspdk_event_scsi.so.6.0 00:04:27.386 SO libspdk_event_ublk.so.3.0 00:04:27.386 LIB libspdk_event_nvmf.a 00:04:27.386 SYMLINK libspdk_event_nbd.so 00:04:27.386 SYMLINK libspdk_event_scsi.so 00:04:27.386 SYMLINK libspdk_event_ublk.so 00:04:27.386 SO libspdk_event_nvmf.so.6.0 00:04:27.386 SYMLINK libspdk_event_nvmf.so 00:04:27.645 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:27.645 CC module/event/subsystems/iscsi/iscsi.o 00:04:27.903 LIB libspdk_event_vhost_scsi.a 00:04:27.903 SO libspdk_event_vhost_scsi.so.3.0 00:04:27.903 LIB libspdk_event_iscsi.a 00:04:27.903 SO libspdk_event_iscsi.so.6.0 00:04:27.903 SYMLINK libspdk_event_vhost_scsi.so 00:04:27.903 SYMLINK libspdk_event_iscsi.so 00:04:28.161 SO libspdk.so.6.0 00:04:28.161 SYMLINK libspdk.so 00:04:28.419 CC test/rpc_client/rpc_client_test.o 00:04:28.419 CC app/spdk_nvme_identify/identify.o 00:04:28.685 TEST_HEADER include/spdk/accel_module.h 00:04:28.685 TEST_HEADER include/spdk/accel.h 00:04:28.685 TEST_HEADER include/spdk/assert.h 00:04:28.685 TEST_HEADER include/spdk/base64.h 00:04:28.685 TEST_HEADER include/spdk/barrier.h 00:04:28.685 TEST_HEADER include/spdk/bdev.h 00:04:28.685 CC app/spdk_top/spdk_top.o 00:04:28.685 CXX app/trace/trace.o 00:04:28.685 TEST_HEADER include/spdk/bdev_zone.h 00:04:28.685 TEST_HEADER include/spdk/bit_array.h 00:04:28.685 TEST_HEADER include/spdk/bdev_module.h 00:04:28.685 TEST_HEADER include/spdk/blob_bdev.h 00:04:28.685 CC app/spdk_nvme_discover/discovery_aer.o 00:04:28.685 TEST_HEADER include/spdk/bit_pool.h 00:04:28.685 TEST_HEADER include/spdk/blob.h 00:04:28.685 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:28.685 TEST_HEADER include/spdk/config.h 00:04:28.685 TEST_HEADER include/spdk/blobfs.h 00:04:28.685 TEST_HEADER include/spdk/conf.h 00:04:28.685 CC app/spdk_nvme_perf/perf.o 
00:04:28.685 CC app/spdk_lspci/spdk_lspci.o 00:04:28.685 TEST_HEADER include/spdk/crc64.h 00:04:28.685 TEST_HEADER include/spdk/cpuset.h 00:04:28.685 TEST_HEADER include/spdk/crc32.h 00:04:28.685 TEST_HEADER include/spdk/endian.h 00:04:28.685 TEST_HEADER include/spdk/crc16.h 00:04:28.685 TEST_HEADER include/spdk/dma.h 00:04:28.685 TEST_HEADER include/spdk/env.h 00:04:28.685 TEST_HEADER include/spdk/dif.h 00:04:28.685 TEST_HEADER include/spdk/event.h 00:04:28.685 TEST_HEADER include/spdk/env_dpdk.h 00:04:28.685 TEST_HEADER include/spdk/fd_group.h 00:04:28.685 CC app/trace_record/trace_record.o 00:04:28.685 TEST_HEADER include/spdk/file.h 00:04:28.685 TEST_HEADER include/spdk/fsdev.h 00:04:28.685 TEST_HEADER include/spdk/fd.h 00:04:28.685 TEST_HEADER include/spdk/fsdev_module.h 00:04:28.685 TEST_HEADER include/spdk/ftl.h 00:04:28.685 TEST_HEADER include/spdk/gpt_spec.h 00:04:28.685 TEST_HEADER include/spdk/hexlify.h 00:04:28.685 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:28.685 TEST_HEADER include/spdk/idxd.h 00:04:28.685 TEST_HEADER include/spdk/idxd_spec.h 00:04:28.685 TEST_HEADER include/spdk/histogram_data.h 00:04:28.685 TEST_HEADER include/spdk/init.h 00:04:28.685 TEST_HEADER include/spdk/ioat.h 00:04:28.685 TEST_HEADER include/spdk/iscsi_spec.h 00:04:28.685 TEST_HEADER include/spdk/ioat_spec.h 00:04:28.685 TEST_HEADER include/spdk/json.h 00:04:28.685 TEST_HEADER include/spdk/keyring.h 00:04:28.685 TEST_HEADER include/spdk/jsonrpc.h 00:04:28.685 TEST_HEADER include/spdk/keyring_module.h 00:04:28.685 TEST_HEADER include/spdk/likely.h 00:04:28.685 TEST_HEADER include/spdk/log.h 00:04:28.685 TEST_HEADER include/spdk/lvol.h 00:04:28.685 TEST_HEADER include/spdk/md5.h 00:04:28.685 CC app/nvmf_tgt/nvmf_main.o 00:04:28.685 TEST_HEADER include/spdk/mmio.h 00:04:28.685 TEST_HEADER include/spdk/nbd.h 00:04:28.685 TEST_HEADER include/spdk/memory.h 00:04:28.685 TEST_HEADER include/spdk/net.h 00:04:28.685 TEST_HEADER include/spdk/notify.h 00:04:28.685 TEST_HEADER include/spdk/nvme.h 00:04:28.685 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:28.685 TEST_HEADER include/spdk/nvme_intel.h 00:04:28.685 TEST_HEADER include/spdk/nvme_spec.h 00:04:28.685 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:28.685 TEST_HEADER include/spdk/nvme_zns.h 00:04:28.685 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:28.685 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:28.685 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:28.685 TEST_HEADER include/spdk/nvmf_spec.h 00:04:28.685 TEST_HEADER include/spdk/nvmf.h 00:04:28.685 TEST_HEADER include/spdk/opal.h 00:04:28.685 TEST_HEADER include/spdk/nvmf_transport.h 00:04:28.685 TEST_HEADER include/spdk/pipe.h 00:04:28.685 TEST_HEADER include/spdk/pci_ids.h 00:04:28.685 TEST_HEADER include/spdk/opal_spec.h 00:04:28.685 CC app/iscsi_tgt/iscsi_tgt.o 00:04:28.685 TEST_HEADER include/spdk/reduce.h 00:04:28.685 TEST_HEADER include/spdk/rpc.h 00:04:28.685 TEST_HEADER include/spdk/queue.h 00:04:28.685 TEST_HEADER include/spdk/scsi.h 00:04:28.685 TEST_HEADER include/spdk/scsi_spec.h 00:04:28.685 TEST_HEADER include/spdk/stdinc.h 00:04:28.685 TEST_HEADER include/spdk/string.h 00:04:28.685 TEST_HEADER include/spdk/scheduler.h 00:04:28.685 TEST_HEADER include/spdk/sock.h 00:04:28.685 TEST_HEADER include/spdk/thread.h 00:04:28.685 TEST_HEADER include/spdk/trace.h 00:04:28.685 CC app/spdk_dd/spdk_dd.o 00:04:28.685 TEST_HEADER include/spdk/trace_parser.h 00:04:28.685 TEST_HEADER include/spdk/tree.h 00:04:28.685 TEST_HEADER include/spdk/ublk.h 00:04:28.685 TEST_HEADER include/spdk/util.h 
00:04:28.685 TEST_HEADER include/spdk/version.h 00:04:28.685 TEST_HEADER include/spdk/uuid.h 00:04:28.686 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:28.686 TEST_HEADER include/spdk/vhost.h 00:04:28.686 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:28.686 TEST_HEADER include/spdk/vmd.h 00:04:28.686 TEST_HEADER include/spdk/xor.h 00:04:28.686 TEST_HEADER include/spdk/zipf.h 00:04:28.686 CXX test/cpp_headers/accel.o 00:04:28.686 CXX test/cpp_headers/accel_module.o 00:04:28.686 CXX test/cpp_headers/assert.o 00:04:28.686 CXX test/cpp_headers/barrier.o 00:04:28.686 CC app/spdk_tgt/spdk_tgt.o 00:04:28.686 CXX test/cpp_headers/base64.o 00:04:28.686 CXX test/cpp_headers/bdev.o 00:04:28.686 CXX test/cpp_headers/bdev_module.o 00:04:28.686 CXX test/cpp_headers/bit_array.o 00:04:28.686 CXX test/cpp_headers/bit_pool.o 00:04:28.686 CXX test/cpp_headers/blob_bdev.o 00:04:28.686 CXX test/cpp_headers/bdev_zone.o 00:04:28.686 CXX test/cpp_headers/blobfs.o 00:04:28.686 CXX test/cpp_headers/blobfs_bdev.o 00:04:28.686 CXX test/cpp_headers/blob.o 00:04:28.686 CXX test/cpp_headers/cpuset.o 00:04:28.686 CXX test/cpp_headers/config.o 00:04:28.686 CXX test/cpp_headers/conf.o 00:04:28.686 CXX test/cpp_headers/crc32.o 00:04:28.686 CXX test/cpp_headers/crc16.o 00:04:28.686 CXX test/cpp_headers/crc64.o 00:04:28.686 CXX test/cpp_headers/dif.o 00:04:28.686 CXX test/cpp_headers/dma.o 00:04:28.686 CXX test/cpp_headers/endian.o 00:04:28.686 CXX test/cpp_headers/env_dpdk.o 00:04:28.686 CXX test/cpp_headers/event.o 00:04:28.686 CXX test/cpp_headers/env.o 00:04:28.686 CXX test/cpp_headers/fd.o 00:04:28.686 CXX test/cpp_headers/fd_group.o 00:04:28.686 CXX test/cpp_headers/file.o 00:04:28.686 CXX test/cpp_headers/fsdev_module.o 00:04:28.686 CXX test/cpp_headers/fsdev.o 00:04:28.686 CXX test/cpp_headers/ftl.o 00:04:28.686 CXX test/cpp_headers/gpt_spec.o 00:04:28.686 CXX test/cpp_headers/fuse_dispatcher.o 00:04:28.686 CXX test/cpp_headers/histogram_data.o 00:04:28.686 CXX test/cpp_headers/idxd.o 00:04:28.686 CXX test/cpp_headers/hexlify.o 00:04:28.686 CXX test/cpp_headers/idxd_spec.o 00:04:28.686 CXX test/cpp_headers/init.o 00:04:28.686 CXX test/cpp_headers/ioat.o 00:04:28.686 CXX test/cpp_headers/json.o 00:04:28.686 CXX test/cpp_headers/iscsi_spec.o 00:04:28.686 CXX test/cpp_headers/ioat_spec.o 00:04:28.686 CXX test/cpp_headers/keyring.o 00:04:28.686 CXX test/cpp_headers/likely.o 00:04:28.686 CXX test/cpp_headers/jsonrpc.o 00:04:28.686 CXX test/cpp_headers/keyring_module.o 00:04:28.686 CXX test/cpp_headers/log.o 00:04:28.686 CXX test/cpp_headers/lvol.o 00:04:28.686 CXX test/cpp_headers/md5.o 00:04:28.686 CXX test/cpp_headers/mmio.o 00:04:28.686 CXX test/cpp_headers/memory.o 00:04:28.686 CXX test/cpp_headers/nbd.o 00:04:28.686 CXX test/cpp_headers/net.o 00:04:28.686 CXX test/cpp_headers/nvme.o 00:04:28.686 CXX test/cpp_headers/notify.o 00:04:28.686 CXX test/cpp_headers/nvme_ocssd.o 00:04:28.686 CXX test/cpp_headers/nvme_intel.o 00:04:28.686 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:28.686 CXX test/cpp_headers/nvme_spec.o 00:04:28.686 CXX test/cpp_headers/nvmf_cmd.o 00:04:28.686 CXX test/cpp_headers/nvme_zns.o 00:04:28.686 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:28.686 CXX test/cpp_headers/nvmf.o 00:04:28.686 CXX test/cpp_headers/nvmf_spec.o 00:04:28.686 CXX test/cpp_headers/nvmf_transport.o 00:04:28.686 CXX test/cpp_headers/opal.o 00:04:28.686 CXX test/cpp_headers/opal_spec.o 00:04:28.686 CXX test/cpp_headers/pci_ids.o 00:04:28.686 CXX test/cpp_headers/pipe.o 00:04:28.686 CXX test/cpp_headers/queue.o 00:04:28.686 CXX 
test/cpp_headers/reduce.o 00:04:28.686 CC test/thread/poller_perf/poller_perf.o 00:04:28.686 CXX test/cpp_headers/rpc.o 00:04:28.686 CXX test/cpp_headers/scsi.o 00:04:28.686 CXX test/cpp_headers/scheduler.o 00:04:28.686 CXX test/cpp_headers/scsi_spec.o 00:04:28.686 CXX test/cpp_headers/sock.o 00:04:28.686 CXX test/cpp_headers/stdinc.o 00:04:28.686 CXX test/cpp_headers/string.o 00:04:28.686 CXX test/cpp_headers/thread.o 00:04:28.686 CXX test/cpp_headers/trace.o 00:04:28.686 CC test/app/histogram_perf/histogram_perf.o 00:04:28.686 CXX test/cpp_headers/tree.o 00:04:28.686 CXX test/cpp_headers/trace_parser.o 00:04:28.686 CC test/app/jsoncat/jsoncat.o 00:04:28.686 CC test/app/stub/stub.o 00:04:28.686 CC test/app/bdev_svc/bdev_svc.o 00:04:28.961 CC test/env/pci/pci_ut.o 00:04:28.961 CC test/env/memory/memory_ut.o 00:04:28.961 CC test/dma/test_dma/test_dma.o 00:04:28.961 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:28.961 CXX test/cpp_headers/ublk.o 00:04:28.961 CC examples/util/zipf/zipf.o 00:04:28.961 CC test/env/vtophys/vtophys.o 00:04:28.961 CC app/fio/nvme/fio_plugin.o 00:04:28.961 CC examples/ioat/perf/perf.o 00:04:28.961 CC examples/ioat/verify/verify.o 00:04:28.961 CC app/fio/bdev/fio_plugin.o 00:04:28.961 LINK spdk_lspci 00:04:29.227 LINK spdk_nvme_discover 00:04:29.227 LINK nvmf_tgt 00:04:29.227 LINK rpc_client_test 00:04:29.227 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:29.489 CC test/env/mem_callbacks/mem_callbacks.o 00:04:29.489 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:29.489 LINK histogram_perf 00:04:29.489 LINK interrupt_tgt 00:04:29.489 LINK poller_perf 00:04:29.489 LINK spdk_trace_record 00:04:29.489 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:29.489 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:29.489 CXX test/cpp_headers/util.o 00:04:29.489 CXX test/cpp_headers/uuid.o 00:04:29.489 CXX test/cpp_headers/version.o 00:04:29.489 CXX test/cpp_headers/vfio_user_pci.o 00:04:29.489 LINK jsoncat 00:04:29.489 CXX test/cpp_headers/vfio_user_spec.o 00:04:29.489 CXX test/cpp_headers/vhost.o 00:04:29.489 LINK vtophys 00:04:29.489 CXX test/cpp_headers/vmd.o 00:04:29.489 CXX test/cpp_headers/xor.o 00:04:29.489 CXX test/cpp_headers/zipf.o 00:04:29.489 LINK env_dpdk_post_init 00:04:29.489 LINK zipf 00:04:29.489 LINK spdk_tgt 00:04:29.489 LINK stub 00:04:29.489 LINK bdev_svc 00:04:29.489 LINK iscsi_tgt 00:04:29.489 LINK verify 00:04:29.489 LINK ioat_perf 00:04:29.748 LINK spdk_trace 00:04:29.748 LINK mem_callbacks 00:04:29.748 LINK pci_ut 00:04:29.748 LINK spdk_dd 00:04:29.748 LINK test_dma 00:04:29.748 LINK spdk_nvme 00:04:29.748 LINK vhost_fuzz 00:04:29.748 LINK spdk_bdev 00:04:29.748 LINK nvme_fuzz 00:04:30.007 LINK spdk_nvme_perf 00:04:30.007 LINK spdk_nvme_identify 00:04:30.007 CC examples/idxd/perf/perf.o 00:04:30.007 CC test/event/event_perf/event_perf.o 00:04:30.007 CC test/event/reactor_perf/reactor_perf.o 00:04:30.007 CC test/event/reactor/reactor.o 00:04:30.007 CC examples/sock/hello_world/hello_sock.o 00:04:30.007 CC examples/vmd/led/led.o 00:04:30.007 CC examples/vmd/lsvmd/lsvmd.o 00:04:30.007 CC examples/thread/thread/thread_ex.o 00:04:30.007 CC app/vhost/vhost.o 00:04:30.007 LINK spdk_top 00:04:30.007 CC test/event/app_repeat/app_repeat.o 00:04:30.007 CC test/event/scheduler/scheduler.o 00:04:30.007 LINK memory_ut 00:04:30.007 LINK reactor 00:04:30.007 LINK reactor_perf 00:04:30.266 LINK lsvmd 00:04:30.266 LINK event_perf 00:04:30.266 LINK led 00:04:30.266 LINK app_repeat 00:04:30.266 LINK vhost 00:04:30.266 LINK hello_sock 00:04:30.266 LINK idxd_perf 
00:04:30.266 LINK thread 00:04:30.266 LINK scheduler 00:04:30.266 CC test/nvme/aer/aer.o 00:04:30.266 CC test/nvme/startup/startup.o 00:04:30.266 CC test/blobfs/mkfs/mkfs.o 00:04:30.266 CC test/nvme/overhead/overhead.o 00:04:30.266 CC test/nvme/boot_partition/boot_partition.o 00:04:30.266 CC test/nvme/reserve/reserve.o 00:04:30.266 CC test/nvme/cuse/cuse.o 00:04:30.266 CC test/nvme/compliance/nvme_compliance.o 00:04:30.266 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:30.266 CC test/nvme/connect_stress/connect_stress.o 00:04:30.266 CC test/nvme/err_injection/err_injection.o 00:04:30.266 CC test/nvme/reset/reset.o 00:04:30.266 CC test/nvme/simple_copy/simple_copy.o 00:04:30.266 CC test/nvme/fdp/fdp.o 00:04:30.266 CC test/nvme/e2edp/nvme_dp.o 00:04:30.266 CC test/nvme/sgl/sgl.o 00:04:30.266 CC test/nvme/fused_ordering/fused_ordering.o 00:04:30.266 CC test/accel/dif/dif.o 00:04:30.525 CC test/lvol/esnap/esnap.o 00:04:30.525 LINK startup 00:04:30.525 LINK boot_partition 00:04:30.525 LINK doorbell_aers 00:04:30.525 LINK mkfs 00:04:30.525 LINK reserve 00:04:30.525 LINK connect_stress 00:04:30.525 LINK err_injection 00:04:30.525 LINK fused_ordering 00:04:30.525 LINK simple_copy 00:04:30.525 LINK aer 00:04:30.525 LINK reset 00:04:30.525 LINK sgl 00:04:30.525 LINK overhead 00:04:30.525 LINK nvme_dp 00:04:30.525 LINK nvme_compliance 00:04:30.525 LINK fdp 00:04:30.785 CC examples/nvme/abort/abort.o 00:04:30.785 CC examples/nvme/hotplug/hotplug.o 00:04:30.785 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:30.785 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:30.785 CC examples/nvme/arbitration/arbitration.o 00:04:30.785 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:30.785 CC examples/nvme/reconnect/reconnect.o 00:04:30.785 CC examples/nvme/hello_world/hello_world.o 00:04:30.785 CC examples/accel/perf/accel_perf.o 00:04:30.785 CC examples/blob/hello_world/hello_blob.o 00:04:30.785 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:30.785 CC examples/blob/cli/blobcli.o 00:04:30.785 LINK iscsi_fuzz 00:04:30.785 LINK pmr_persistence 00:04:30.785 LINK cmb_copy 00:04:30.785 LINK dif 00:04:30.785 LINK hello_world 00:04:31.044 LINK hotplug 00:04:31.044 LINK arbitration 00:04:31.044 LINK abort 00:04:31.044 LINK reconnect 00:04:31.044 LINK hello_blob 00:04:31.044 LINK nvme_manage 00:04:31.044 LINK hello_fsdev 00:04:31.303 LINK accel_perf 00:04:31.303 LINK blobcli 00:04:31.303 LINK cuse 00:04:31.562 CC test/bdev/bdevio/bdevio.o 00:04:31.821 CC examples/bdev/hello_world/hello_bdev.o 00:04:31.821 CC examples/bdev/bdevperf/bdevperf.o 00:04:31.821 LINK bdevio 00:04:32.079 LINK hello_bdev 00:04:32.338 LINK bdevperf 00:04:32.906 CC examples/nvmf/nvmf/nvmf.o 00:04:33.165 LINK nvmf 00:04:34.104 LINK esnap 00:04:34.363 00:04:34.363 real 0m54.031s 00:04:34.363 user 6m9.950s 00:04:34.363 sys 2m57.537s 00:04:34.363 15:23:11 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:34.363 15:23:11 make -- common/autotest_common.sh@10 -- $ set +x 00:04:34.363 ************************************ 00:04:34.363 END TEST make 00:04:34.363 ************************************ 00:04:34.363 15:23:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:34.363 15:23:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:34.363 15:23:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:34.363 15:23:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.363 15:23:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 
00:04:34.363 15:23:12 -- pm/common@44 -- $ pid=1991010 00:04:34.363 15:23:12 -- pm/common@50 -- $ kill -TERM 1991010 00:04:34.364 15:23:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.364 15:23:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:34.364 15:23:12 -- pm/common@44 -- $ pid=1991012 00:04:34.364 15:23:12 -- pm/common@50 -- $ kill -TERM 1991012 00:04:34.364 15:23:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.364 15:23:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:34.364 15:23:12 -- pm/common@44 -- $ pid=1991014 00:04:34.364 15:23:12 -- pm/common@50 -- $ kill -TERM 1991014 00:04:34.364 15:23:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.364 15:23:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:34.364 15:23:12 -- pm/common@44 -- $ pid=1991038 00:04:34.364 15:23:12 -- pm/common@50 -- $ sudo -E kill -TERM 1991038 00:04:34.364 15:23:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:34.364 15:23:12 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:04:34.624 15:23:12 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.624 15:23:12 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.624 15:23:12 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.624 15:23:12 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.624 15:23:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.624 15:23:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.624 15:23:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.624 15:23:12 -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.624 15:23:12 -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.624 15:23:12 -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.624 15:23:12 -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.624 15:23:12 -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.624 15:23:12 -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.624 15:23:12 -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.624 15:23:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.624 15:23:12 -- scripts/common.sh@344 -- # case "$op" in 00:04:34.624 15:23:12 -- scripts/common.sh@345 -- # : 1 00:04:34.624 15:23:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.624 15:23:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.624 15:23:12 -- scripts/common.sh@365 -- # decimal 1 00:04:34.624 15:23:12 -- scripts/common.sh@353 -- # local d=1 00:04:34.624 15:23:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.624 15:23:12 -- scripts/common.sh@355 -- # echo 1 00:04:34.624 15:23:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.624 15:23:12 -- scripts/common.sh@366 -- # decimal 2 00:04:34.624 15:23:12 -- scripts/common.sh@353 -- # local d=2 00:04:34.624 15:23:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.624 15:23:12 -- scripts/common.sh@355 -- # echo 2 00:04:34.624 15:23:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.624 15:23:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.624 15:23:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.624 15:23:12 -- scripts/common.sh@368 -- # return 0 00:04:34.624 15:23:12 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.624 15:23:12 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.624 --rc genhtml_branch_coverage=1 00:04:34.624 --rc genhtml_function_coverage=1 00:04:34.624 --rc genhtml_legend=1 00:04:34.624 --rc geninfo_all_blocks=1 00:04:34.624 --rc geninfo_unexecuted_blocks=1 00:04:34.624 00:04:34.624 ' 00:04:34.624 15:23:12 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.624 --rc genhtml_branch_coverage=1 00:04:34.624 --rc genhtml_function_coverage=1 00:04:34.624 --rc genhtml_legend=1 00:04:34.624 --rc geninfo_all_blocks=1 00:04:34.624 --rc geninfo_unexecuted_blocks=1 00:04:34.624 00:04:34.624 ' 00:04:34.624 15:23:12 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:34.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.624 --rc genhtml_branch_coverage=1 00:04:34.624 --rc genhtml_function_coverage=1 00:04:34.624 --rc genhtml_legend=1 00:04:34.624 --rc geninfo_all_blocks=1 00:04:34.624 --rc geninfo_unexecuted_blocks=1 00:04:34.624 00:04:34.624 ' 00:04:34.624 15:23:12 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.624 --rc genhtml_branch_coverage=1 00:04:34.624 --rc genhtml_function_coverage=1 00:04:34.624 --rc genhtml_legend=1 00:04:34.624 --rc geninfo_all_blocks=1 00:04:34.624 --rc geninfo_unexecuted_blocks=1 00:04:34.624 00:04:34.624 ' 00:04:34.624 15:23:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:34.624 15:23:12 -- nvmf/common.sh@7 -- # uname -s 00:04:34.624 15:23:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.624 15:23:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.624 15:23:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.624 15:23:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.624 15:23:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.624 15:23:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.624 15:23:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.624 15:23:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.624 15:23:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.624 15:23:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.624 15:23:12 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:34.624 15:23:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:34.624 15:23:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.624 15:23:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.624 15:23:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:34.624 15:23:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.624 15:23:12 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:34.624 15:23:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:34.624 15:23:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.624 15:23:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.624 15:23:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.624 15:23:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.624 15:23:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.624 15:23:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.624 15:23:12 -- paths/export.sh@5 -- # export PATH 00:04:34.624 15:23:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.624 15:23:12 -- nvmf/common.sh@51 -- # : 0 00:04:34.624 15:23:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:34.624 15:23:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:34.624 15:23:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.624 15:23:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.624 15:23:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.624 15:23:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:34.624 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:34.624 15:23:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:34.624 15:23:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:34.624 15:23:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:34.624 15:23:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:34.624 15:23:12 -- spdk/autotest.sh@32 -- # uname -s 00:04:34.624 15:23:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:34.624 15:23:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:34.624 15:23:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:34.624 
15:23:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:34.624 15:23:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:34.624 15:23:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:34.625 15:23:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:34.625 15:23:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:34.625 15:23:12 -- spdk/autotest.sh@48 -- # udevadm_pid=2069352 00:04:34.625 15:23:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:34.625 15:23:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:34.625 15:23:12 -- pm/common@17 -- # local monitor 00:04:34.625 15:23:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.625 15:23:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.625 15:23:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.625 15:23:12 -- pm/common@21 -- # date +%s 00:04:34.625 15:23:12 -- pm/common@21 -- # date +%s 00:04:34.625 15:23:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.625 15:23:12 -- pm/common@25 -- # sleep 1 00:04:34.625 15:23:12 -- pm/common@21 -- # date +%s 00:04:34.625 15:23:12 -- pm/common@21 -- # date +%s 00:04:34.625 15:23:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730643792 00:04:34.625 15:23:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730643792 00:04:34.625 15:23:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730643792 00:04:34.625 15:23:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730643792 00:04:34.625 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730643792_collect-vmstat.pm.log 00:04:34.625 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730643792_collect-cpu-load.pm.log 00:04:34.625 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730643792_collect-cpu-temp.pm.log 00:04:34.625 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730643792_collect-bmc-pm.bmc.pm.log 00:04:35.563 15:23:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:35.563 15:23:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:35.563 15:23:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.563 15:23:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.563 15:23:13 -- spdk/autotest.sh@59 -- # create_test_list 00:04:35.563 15:23:13 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:35.563 15:23:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.822 15:23:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:04:35.822 15:23:13 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:35.822 15:23:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:35.822 15:23:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:04:35.822 15:23:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:35.822 15:23:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:35.822 15:23:13 -- common/autotest_common.sh@1455 -- # uname 00:04:35.822 15:23:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:35.822 15:23:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:35.822 15:23:13 -- common/autotest_common.sh@1475 -- # uname 00:04:35.822 15:23:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:35.822 15:23:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:35.822 15:23:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:35.822 lcov: LCOV version 1.15 00:04:35.823 15:23:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:53.917 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:53.917 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:00.489 15:23:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:00.489 15:23:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.489 15:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:00.489 15:23:37 -- spdk/autotest.sh@78 -- # rm -f 00:05:00.489 15:23:37 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.780 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:03.780 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:05:03.780 15:23:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:03.780 15:23:41 -- 
common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:03.780 15:23:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:03.780 15:23:41 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:03.780 15:23:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:03.780 15:23:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:03.780 15:23:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:03.780 15:23:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:03.780 15:23:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:03.780 15:23:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:03.780 15:23:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.780 15:23:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:03.780 15:23:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:03.780 15:23:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:03.780 15:23:41 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:03.780 No valid GPT data, bailing 00:05:03.780 15:23:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:03.780 15:23:41 -- scripts/common.sh@394 -- # pt= 00:05:03.780 15:23:41 -- scripts/common.sh@395 -- # return 1 00:05:03.780 15:23:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:03.780 1+0 records in 00:05:03.780 1+0 records out 00:05:03.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00727069 s, 144 MB/s 00:05:03.780 15:23:41 -- spdk/autotest.sh@105 -- # sync 00:05:03.780 15:23:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:03.780 15:23:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:03.780 15:23:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:10.453 15:23:47 -- spdk/autotest.sh@111 -- # uname -s 00:05:10.453 15:23:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:10.453 15:23:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:10.453 15:23:47 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:13.743 Hugepages 00:05:13.743 node hugesize free / total 00:05:13.743 node0 1048576kB 0 / 0 00:05:13.743 node0 2048kB 0 / 0 00:05:13.743 node1 1048576kB 0 / 0 00:05:13.743 node1 2048kB 0 / 0 00:05:13.743 00:05:13.743 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:13.743 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:13.743 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:13.743 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:13.743 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:13.743 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:13.743 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:13.743 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:13.743 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:13.743 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:13.743 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:13.743 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:13.743 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:13.743 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:13.743 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:13.743 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:13.743 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:14.002 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:14.002 15:23:51 -- spdk/autotest.sh@117 -- # uname 
-s 00:05:14.002 15:23:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:14.002 15:23:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:14.002 15:23:51 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:17.290 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:17.290 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:19.825 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:19.825 15:23:57 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:20.394 15:23:58 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:20.394 15:23:58 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:20.394 15:23:58 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:20.394 15:23:58 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:20.394 15:23:58 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:20.394 15:23:58 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:20.394 15:23:58 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.394 15:23:58 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:20.394 15:23:58 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:20.653 15:23:58 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:20.653 15:23:58 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:05:20.653 15:23:58 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.943 Waiting for block devices as requested 00:05:23.943 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:23.943 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:23.943 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:23.943 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:23.943 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:24.202 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:24.202 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:24.202 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:24.461 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:24.461 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:24.461 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:24.721 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:24.721 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:24.721 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:24.980 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:24.980 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:24.980 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:25.239 15:24:02 -- common/autotest_common.sh@1522 -- # for bdf in 
"${bdfs[@]}" 00:05:25.239 15:24:02 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:25.239 15:24:02 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:25.239 15:24:02 -- common/autotest_common.sh@1485 -- # grep 0000:d8:00.0/nvme/nvme 00:05:25.239 15:24:02 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:25.239 15:24:02 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:25.239 15:24:02 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:25.239 15:24:02 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:25.239 15:24:02 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:25.239 15:24:02 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:25.239 15:24:02 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:25.239 15:24:02 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:25.239 15:24:02 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:25.239 15:24:02 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:05:25.239 15:24:02 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:25.239 15:24:02 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:25.239 15:24:02 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:25.239 15:24:02 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:25.239 15:24:02 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:25.239 15:24:02 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:25.239 15:24:02 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:25.239 15:24:02 -- common/autotest_common.sh@1541 -- # continue 00:05:25.239 15:24:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:25.239 15:24:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.239 15:24:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.239 15:24:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:25.239 15:24:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.239 15:24:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.239 15:24:03 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:29.430 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:29.430 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:30.809 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:31.068 15:24:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:31.068 15:24:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:31.068 
15:24:08 -- common/autotest_common.sh@10 -- # set +x 00:05:31.068 15:24:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:31.068 15:24:08 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:31.068 15:24:08 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:31.068 15:24:08 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:31.068 15:24:08 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:31.068 15:24:08 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:31.068 15:24:08 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:31.068 15:24:08 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:31.068 15:24:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:31.068 15:24:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:31.068 15:24:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.068 15:24:08 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:31.068 15:24:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:31.068 15:24:08 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:31.068 15:24:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:05:31.068 15:24:08 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:31.068 15:24:08 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:31.068 15:24:08 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:31.068 15:24:08 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:31.068 15:24:08 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:31.068 15:24:08 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:31.068 15:24:08 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:05:31.068 15:24:08 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:05:31.068 15:24:08 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2085711 00:05:31.068 15:24:08 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.068 15:24:08 -- common/autotest_common.sh@1583 -- # waitforlisten 2085711 00:05:31.068 15:24:08 -- common/autotest_common.sh@833 -- # '[' -z 2085711 ']' 00:05:31.068 15:24:08 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.068 15:24:08 -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.068 15:24:08 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.068 15:24:08 -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.068 15:24:08 -- common/autotest_common.sh@10 -- # set +x 00:05:31.068 [2024-11-03 15:24:08.837606] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:05:31.068 [2024-11-03 15:24:08.837662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085711 ] 00:05:31.328 [2024-11-03 15:24:08.915426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.328 [2024-11-03 15:24:08.938069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.588 15:24:09 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.588 15:24:09 -- common/autotest_common.sh@866 -- # return 0 00:05:31.588 15:24:09 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:31.588 15:24:09 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:31.588 15:24:09 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:34.879 nvme0n1 00:05:34.879 15:24:12 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:34.879 [2024-11-03 15:24:12.340392] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:34.879 request: 00:05:34.879 { 00:05:34.879 "nvme_ctrlr_name": "nvme0", 00:05:34.879 "password": "test", 00:05:34.879 "method": "bdev_nvme_opal_revert", 00:05:34.879 "req_id": 1 00:05:34.879 } 00:05:34.879 Got JSON-RPC error response 00:05:34.879 response: 00:05:34.879 { 00:05:34.879 "code": -32602, 00:05:34.879 "message": "Invalid parameters" 00:05:34.879 } 00:05:34.879 15:24:12 -- common/autotest_common.sh@1589 -- # true 00:05:34.879 15:24:12 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:34.879 15:24:12 -- common/autotest_common.sh@1593 -- # killprocess 2085711 00:05:34.879 15:24:12 -- common/autotest_common.sh@952 -- # '[' -z 2085711 ']' 00:05:34.879 15:24:12 -- common/autotest_common.sh@956 -- # kill -0 2085711 00:05:34.879 15:24:12 -- common/autotest_common.sh@957 -- # uname 00:05:34.879 15:24:12 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:34.879 15:24:12 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2085711 00:05:34.879 15:24:12 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:34.879 15:24:12 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:34.879 15:24:12 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2085711' 00:05:34.879 killing process with pid 2085711 00:05:34.879 15:24:12 -- common/autotest_common.sh@971 -- # kill 2085711 00:05:34.879 15:24:12 -- common/autotest_common.sh@976 -- # wait 2085711 00:05:34.879 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
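
The env suite that runs next begins by checking the installed lcov version, tracing scripts/common.sh's `lt`/`cmp_versions` helpers (the `lt 1.15 2` walk below): each version string is split on `.`, `-`, and `:` and the fields are compared numerically left to right. A standalone sketch of that comparison under the same rules — `version_lt` is a hypothetical name; the real helpers live in scripts/common.sh:

    # Succeed iff dotted version $1 is strictly less than $2; fields are
    # compared numerically left to right, with missing fields counting as 0.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints
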
00:05:37.415 15:24:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:37.415 15:24:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:37.415 15:24:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:37.415 15:24:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:37.415 15:24:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:37.415 15:24:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.415 15:24:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.415 15:24:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:37.415 15:24:14 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:37.415 15:24:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:37.415 15:24:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.415 15:24:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.415 ************************************ 00:05:37.415 START TEST env 00:05:37.415 ************************************ 00:05:37.415 15:24:14 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:37.415 * Looking for test storage... 00:05:37.415 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:37.415 15:24:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.415 15:24:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.415 15:24:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.415 15:24:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.415 15:24:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.415 15:24:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.415 15:24:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.415 15:24:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.415 15:24:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.415 15:24:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.415 15:24:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.415 15:24:15 env -- scripts/common.sh@344 -- # case "$op" in 00:05:37.415 15:24:15 env -- scripts/common.sh@345 -- # : 1 00:05:37.415 15:24:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.415 15:24:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:05:37.415 15:24:15 env -- scripts/common.sh@365 -- # decimal 1 00:05:37.415 15:24:15 env -- scripts/common.sh@353 -- # local d=1 00:05:37.415 15:24:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.415 15:24:15 env -- scripts/common.sh@355 -- # echo 1 00:05:37.415 15:24:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.415 15:24:15 env -- scripts/common.sh@366 -- # decimal 2 00:05:37.415 15:24:15 env -- scripts/common.sh@353 -- # local d=2 00:05:37.415 15:24:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.415 15:24:15 env -- scripts/common.sh@355 -- # echo 2 00:05:37.415 15:24:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.415 15:24:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.415 15:24:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.415 15:24:15 env -- scripts/common.sh@368 -- # return 0 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:37.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.415 --rc genhtml_branch_coverage=1 00:05:37.415 --rc genhtml_function_coverage=1 00:05:37.415 --rc genhtml_legend=1 00:05:37.415 --rc geninfo_all_blocks=1 00:05:37.415 --rc geninfo_unexecuted_blocks=1 00:05:37.415 00:05:37.415 ' 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:37.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.415 --rc genhtml_branch_coverage=1 00:05:37.415 --rc genhtml_function_coverage=1 00:05:37.415 --rc genhtml_legend=1 00:05:37.415 --rc geninfo_all_blocks=1 00:05:37.415 --rc geninfo_unexecuted_blocks=1 00:05:37.415 00:05:37.415 ' 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:37.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.415 --rc genhtml_branch_coverage=1 00:05:37.415 --rc genhtml_function_coverage=1 00:05:37.415 --rc genhtml_legend=1 00:05:37.415 --rc geninfo_all_blocks=1 00:05:37.415 --rc geninfo_unexecuted_blocks=1 00:05:37.415 00:05:37.415 ' 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:37.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.415 --rc genhtml_branch_coverage=1 00:05:37.415 --rc genhtml_function_coverage=1 00:05:37.415 --rc genhtml_legend=1 00:05:37.415 --rc geninfo_all_blocks=1 00:05:37.415 --rc geninfo_unexecuted_blocks=1 00:05:37.415 00:05:37.415 ' 00:05:37.415 15:24:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:37.415 15:24:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.415 15:24:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.415 ************************************ 00:05:37.415 START TEST env_memory 00:05:37.415 ************************************ 00:05:37.415 15:24:15 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.415 00:05:37.415 00:05:37.415 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.415 http://cunit.sourceforge.net/ 00:05:37.415 00:05:37.415 00:05:37.415 Suite: memory 00:05:37.415 Test: alloc and free memory map ...[2024-11-03 15:24:15.179261] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.415 passed 00:05:37.415 Test: mem map translation ...[2024-11-03 15:24:15.198245] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.415 [2024-11-03 15:24:15.198261] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.415 [2024-11-03 15:24:15.198298] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.415 [2024-11-03 15:24:15.198307] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.676 passed 00:05:37.676 Test: mem map registration ...[2024-11-03 15:24:15.234556] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:37.676 [2024-11-03 15:24:15.234571] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:37.676 passed 00:05:37.676 Test: mem map adjacent registrations ...passed 00:05:37.676 00:05:37.676 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.676 suites 1 1 n/a 0 0 00:05:37.676 tests 4 4 4 0 0 00:05:37.676 asserts 152 152 152 0 n/a 00:05:37.676 00:05:37.676 Elapsed time = 0.132 seconds 00:05:37.676 00:05:37.676 real 0m0.146s 00:05:37.676 user 0m0.134s 00:05:37.676 sys 0m0.011s 00:05:37.676 15:24:15 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:37.676 15:24:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:37.676 ************************************ 00:05:37.676 END TEST env_memory 00:05:37.676 ************************************ 00:05:37.676 15:24:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.676 15:24:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:37.676 15:24:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.676 15:24:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.676 ************************************ 00:05:37.676 START TEST env_vtophys 00:05:37.676 ************************************ 00:05:37.676 15:24:15 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.676 EAL: lib.eal log level changed from notice to debug 00:05:37.676 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.676 EAL: Detected lcore 1 as core 1 on socket 0 00:05:37.676 EAL: Detected lcore 2 as core 2 on socket 0 00:05:37.676 EAL: Detected lcore 3 as core 3 on socket 0 00:05:37.676 EAL: Detected lcore 4 as core 4 on socket 0 00:05:37.676 EAL: Detected lcore 5 as core 5 on socket 0 00:05:37.676 EAL: Detected lcore 6 as core 6 on socket 0 00:05:37.676 EAL: Detected lcore 7 as core 8 on socket 0 00:05:37.676 EAL: Detected lcore 8 as core 9 on socket 0 00:05:37.676 EAL: Detected lcore 9 as core 10 on socket 0 00:05:37.676 EAL: Detected lcore 10 as core 11 on socket 0 00:05:37.676 
EAL: Detected lcore 11 as core 12 on socket 0 00:05:37.676 EAL: Detected lcore 12 as core 13 on socket 0 00:05:37.676 EAL: Detected lcore 13 as core 14 on socket 0 00:05:37.676 EAL: Detected lcore 14 as core 16 on socket 0 00:05:37.676 EAL: Detected lcore 15 as core 17 on socket 0 00:05:37.676 EAL: Detected lcore 16 as core 18 on socket 0 00:05:37.676 EAL: Detected lcore 17 as core 19 on socket 0 00:05:37.676 EAL: Detected lcore 18 as core 20 on socket 0 00:05:37.676 EAL: Detected lcore 19 as core 21 on socket 0 00:05:37.676 EAL: Detected lcore 20 as core 22 on socket 0 00:05:37.676 EAL: Detected lcore 21 as core 24 on socket 0 00:05:37.676 EAL: Detected lcore 22 as core 25 on socket 0 00:05:37.676 EAL: Detected lcore 23 as core 26 on socket 0 00:05:37.676 EAL: Detected lcore 24 as core 27 on socket 0 00:05:37.676 EAL: Detected lcore 25 as core 28 on socket 0 00:05:37.676 EAL: Detected lcore 26 as core 29 on socket 0 00:05:37.676 EAL: Detected lcore 27 as core 30 on socket 0 00:05:37.676 EAL: Detected lcore 28 as core 0 on socket 1 00:05:37.676 EAL: Detected lcore 29 as core 1 on socket 1 00:05:37.676 EAL: Detected lcore 30 as core 2 on socket 1 00:05:37.676 EAL: Detected lcore 31 as core 3 on socket 1 00:05:37.676 EAL: Detected lcore 32 as core 4 on socket 1 00:05:37.676 EAL: Detected lcore 33 as core 5 on socket 1 00:05:37.676 EAL: Detected lcore 34 as core 6 on socket 1 00:05:37.676 EAL: Detected lcore 35 as core 8 on socket 1 00:05:37.676 EAL: Detected lcore 36 as core 9 on socket 1 00:05:37.676 EAL: Detected lcore 37 as core 10 on socket 1 00:05:37.676 EAL: Detected lcore 38 as core 11 on socket 1 00:05:37.676 EAL: Detected lcore 39 as core 12 on socket 1 00:05:37.676 EAL: Detected lcore 40 as core 13 on socket 1 00:05:37.676 EAL: Detected lcore 41 as core 14 on socket 1 00:05:37.676 EAL: Detected lcore 42 as core 16 on socket 1 00:05:37.676 EAL: Detected lcore 43 as core 17 on socket 1 00:05:37.676 EAL: Detected lcore 44 as core 18 on socket 1 00:05:37.676 EAL: Detected lcore 45 as core 19 on socket 1 00:05:37.676 EAL: Detected lcore 46 as core 20 on socket 1 00:05:37.676 EAL: Detected lcore 47 as core 21 on socket 1 00:05:37.676 EAL: Detected lcore 48 as core 22 on socket 1 00:05:37.676 EAL: Detected lcore 49 as core 24 on socket 1 00:05:37.676 EAL: Detected lcore 50 as core 25 on socket 1 00:05:37.676 EAL: Detected lcore 51 as core 26 on socket 1 00:05:37.676 EAL: Detected lcore 52 as core 27 on socket 1 00:05:37.676 EAL: Detected lcore 53 as core 28 on socket 1 00:05:37.676 EAL: Detected lcore 54 as core 29 on socket 1 00:05:37.676 EAL: Detected lcore 55 as core 30 on socket 1 00:05:37.676 EAL: Detected lcore 56 as core 0 on socket 0 00:05:37.676 EAL: Detected lcore 57 as core 1 on socket 0 00:05:37.676 EAL: Detected lcore 58 as core 2 on socket 0 00:05:37.676 EAL: Detected lcore 59 as core 3 on socket 0 00:05:37.676 EAL: Detected lcore 60 as core 4 on socket 0 00:05:37.676 EAL: Detected lcore 61 as core 5 on socket 0 00:05:37.676 EAL: Detected lcore 62 as core 6 on socket 0 00:05:37.676 EAL: Detected lcore 63 as core 8 on socket 0 00:05:37.676 EAL: Detected lcore 64 as core 9 on socket 0 00:05:37.676 EAL: Detected lcore 65 as core 10 on socket 0 00:05:37.676 EAL: Detected lcore 66 as core 11 on socket 0 00:05:37.676 EAL: Detected lcore 67 as core 12 on socket 0 00:05:37.676 EAL: Detected lcore 68 as core 13 on socket 0 00:05:37.676 EAL: Detected lcore 69 as core 14 on socket 0 00:05:37.676 EAL: Detected lcore 70 as core 16 on socket 0 00:05:37.676 EAL: Detected lcore 71 as core 
17 on socket 0 00:05:37.676 EAL: Detected lcore 72 as core 18 on socket 0 00:05:37.676 EAL: Detected lcore 73 as core 19 on socket 0 00:05:37.676 EAL: Detected lcore 74 as core 20 on socket 0 00:05:37.676 EAL: Detected lcore 75 as core 21 on socket 0 00:05:37.676 EAL: Detected lcore 76 as core 22 on socket 0 00:05:37.676 EAL: Detected lcore 77 as core 24 on socket 0 00:05:37.676 EAL: Detected lcore 78 as core 25 on socket 0 00:05:37.676 EAL: Detected lcore 79 as core 26 on socket 0 00:05:37.676 EAL: Detected lcore 80 as core 27 on socket 0 00:05:37.676 EAL: Detected lcore 81 as core 28 on socket 0 00:05:37.676 EAL: Detected lcore 82 as core 29 on socket 0 00:05:37.676 EAL: Detected lcore 83 as core 30 on socket 0 00:05:37.676 EAL: Detected lcore 84 as core 0 on socket 1 00:05:37.676 EAL: Detected lcore 85 as core 1 on socket 1 00:05:37.676 EAL: Detected lcore 86 as core 2 on socket 1 00:05:37.676 EAL: Detected lcore 87 as core 3 on socket 1 00:05:37.676 EAL: Detected lcore 88 as core 4 on socket 1 00:05:37.676 EAL: Detected lcore 89 as core 5 on socket 1 00:05:37.676 EAL: Detected lcore 90 as core 6 on socket 1 00:05:37.676 EAL: Detected lcore 91 as core 8 on socket 1 00:05:37.676 EAL: Detected lcore 92 as core 9 on socket 1 00:05:37.676 EAL: Detected lcore 93 as core 10 on socket 1 00:05:37.676 EAL: Detected lcore 94 as core 11 on socket 1 00:05:37.676 EAL: Detected lcore 95 as core 12 on socket 1 00:05:37.676 EAL: Detected lcore 96 as core 13 on socket 1 00:05:37.676 EAL: Detected lcore 97 as core 14 on socket 1 00:05:37.676 EAL: Detected lcore 98 as core 16 on socket 1 00:05:37.676 EAL: Detected lcore 99 as core 17 on socket 1 00:05:37.676 EAL: Detected lcore 100 as core 18 on socket 1 00:05:37.676 EAL: Detected lcore 101 as core 19 on socket 1 00:05:37.676 EAL: Detected lcore 102 as core 20 on socket 1 00:05:37.676 EAL: Detected lcore 103 as core 21 on socket 1 00:05:37.676 EAL: Detected lcore 104 as core 22 on socket 1 00:05:37.676 EAL: Detected lcore 105 as core 24 on socket 1 00:05:37.676 EAL: Detected lcore 106 as core 25 on socket 1 00:05:37.676 EAL: Detected lcore 107 as core 26 on socket 1 00:05:37.676 EAL: Detected lcore 108 as core 27 on socket 1 00:05:37.676 EAL: Detected lcore 109 as core 28 on socket 1 00:05:37.676 EAL: Detected lcore 110 as core 29 on socket 1 00:05:37.676 EAL: Detected lcore 111 as core 30 on socket 1 00:05:37.676 EAL: Maximum logical cores by configuration: 128 00:05:37.676 EAL: Detected CPU lcores: 112 00:05:37.676 EAL: Detected NUMA nodes: 2 00:05:37.676 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:37.676 EAL: Detected shared linkage of DPDK 00:05:37.676 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:37.676 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:37.676 EAL: Registered [vdev] bus. 
00:05:37.676 EAL: bus.vdev log level changed from disabled to notice 00:05:37.676 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:37.676 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:37.676 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:37.676 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:37.676 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:37.676 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:37.676 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:37.676 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:37.676 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.676 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Bus pci wants IOVA as 'DC' 00:05:37.677 EAL: Bus vdev wants IOVA as 'DC' 00:05:37.677 EAL: Buses did not request a specific IOVA mode. 00:05:37.677 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:37.677 EAL: Selected IOVA mode 'VA' 00:05:37.677 EAL: Probing VFIO support... 00:05:37.677 EAL: IOMMU type 1 (Type 1) is supported 00:05:37.677 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:37.677 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:37.677 EAL: VFIO support initialized 00:05:37.677 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.677 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.677 EAL: Setting up physically contiguous memory... 
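
EAL settled on VFIO with IOMMU type 1 and IOVA as VA after dlopen'ing the PMDs from the DPDK build tree, and is about to lay out its memseg lists. When reproducing this initialization by hand, it helps to confirm the host side before launching the target; a sketch, where the host checks are generic Linux and the spdk_tgt flags are the ones visible in the '[ DPDK EAL parameters: ... ]' line at the top of this run (core mask and base address are specific to this job):

    # A working IOMMU and the vfio stack must be visible to EAL:
    ls /sys/kernel/iommu_groups    # non-empty when the IOMMU is active
    lsmod | grep vfio              # expect vfio, vfio_pci, vfio_iommu_type1

    # Launch the target with the same core mask and base virtual address;
    # the remaining EAL flags are filled in by the SPDK env layer:
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 --base-virtaddr=0x200000000000
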
00:05:37.677 EAL: Setting maximum number of open files to 524288 00:05:37.677 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.677 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:37.677 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.677 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.677 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.677 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.677 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.677 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.677 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.677 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.677 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.677 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.677 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.677 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.677 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.677 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.677 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.677 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.677 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.677 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.677 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.677 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.677 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.677 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.677 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.677 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.677 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.677 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:37.677 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.677 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:37.677 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.677 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.677 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:37.677 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:37.677 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.677 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:37.677 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.677 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.677 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:37.677 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:37.677 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.677 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:37.677 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.677 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.677 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:37.677 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:37.677 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.677 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:37.677 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.677 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.677 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:37.677 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:37.677 EAL: Hugepages will be freed exactly as allocated. 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: TSC frequency is ~2500000 KHz 00:05:37.677 EAL: Main lcore 0 is ready (tid=7f1291b25a00;cpuset=[0]) 00:05:37.677 EAL: Trying to obtain current memory policy. 00:05:37.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.677 EAL: Restoring previous memory policy: 0 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.677 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:05:37.677 EAL: probe driver: 8086:37d2 net_i40e 00:05:37.677 EAL: Not managed by a supported kernel driver, skipped 00:05:37.677 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:05:37.677 EAL: probe driver: 8086:37d2 net_i40e 00:05:37.677 EAL: Not managed by a supported kernel driver, skipped 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.677 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.677 00:05:37.677 00:05:37.677 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.677 http://cunit.sourceforge.net/ 00:05:37.677 00:05:37.677 00:05:37.677 Suite: components_suite 00:05:37.677 Test: vtophys_malloc_test ...passed 00:05:37.677 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:37.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.677 EAL: Restoring previous memory policy: 4 00:05:37.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.677 EAL: Trying to obtain current memory policy. 00:05:37.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.677 EAL: Restoring previous memory policy: 4 00:05:37.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.677 EAL: Trying to obtain current memory policy. 00:05:37.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.677 EAL: Restoring previous memory policy: 4 00:05:37.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.677 EAL: Trying to obtain current memory policy. 
00:05:37.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.677 EAL: Restoring previous memory policy: 4 00:05:37.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.677 EAL: Trying to obtain current memory policy. 00:05:37.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.677 EAL: Restoring previous memory policy: 4 00:05:37.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.677 EAL: request: mp_malloc_sync 00:05:37.677 EAL: No shared files mode enabled, IPC is disabled 00:05:37.677 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.937 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.937 EAL: request: mp_malloc_sync 00:05:37.937 EAL: No shared files mode enabled, IPC is disabled 00:05:37.937 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.937 EAL: Trying to obtain current memory policy. 00:05:37.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.937 EAL: Restoring previous memory policy: 4 00:05:37.937 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.937 EAL: request: mp_malloc_sync 00:05:37.937 EAL: No shared files mode enabled, IPC is disabled 00:05:37.937 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.937 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.937 EAL: request: mp_malloc_sync 00:05:37.937 EAL: No shared files mode enabled, IPC is disabled 00:05:37.937 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.937 EAL: Trying to obtain current memory policy. 00:05:37.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.937 EAL: Restoring previous memory policy: 4 00:05:37.937 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.937 EAL: request: mp_malloc_sync 00:05:37.937 EAL: No shared files mode enabled, IPC is disabled 00:05:37.937 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.937 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.937 EAL: request: mp_malloc_sync 00:05:37.937 EAL: No shared files mode enabled, IPC is disabled 00:05:37.937 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.937 EAL: Trying to obtain current memory policy. 00:05:37.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.937 EAL: Restoring previous memory policy: 4 00:05:37.937 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.937 EAL: request: mp_malloc_sync 00:05:37.937 EAL: No shared files mode enabled, IPC is disabled 00:05:37.937 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.937 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.937 EAL: request: mp_malloc_sync 00:05:37.937 EAL: No shared files mode enabled, IPC is disabled 00:05:37.937 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.937 EAL: Trying to obtain current memory policy. 
00:05:37.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.197 EAL: Restoring previous memory policy: 4 00:05:38.197 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.197 EAL: request: mp_malloc_sync 00:05:38.197 EAL: No shared files mode enabled, IPC is disabled 00:05:38.197 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.197 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.197 EAL: request: mp_malloc_sync 00:05:38.197 EAL: No shared files mode enabled, IPC is disabled 00:05:38.197 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.197 EAL: Trying to obtain current memory policy. 00:05:38.197 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.456 EAL: Restoring previous memory policy: 4 00:05:38.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.456 EAL: request: mp_malloc_sync 00:05:38.456 EAL: No shared files mode enabled, IPC is disabled 00:05:38.456 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.716 EAL: request: mp_malloc_sync 00:05:38.716 EAL: No shared files mode enabled, IPC is disabled 00:05:38.716 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.716 passed 00:05:38.716 00:05:38.716 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.716 suites 1 1 n/a 0 0 00:05:38.716 tests 2 2 2 0 0 00:05:38.716 asserts 497 497 497 0 n/a 00:05:38.716 00:05:38.716 Elapsed time = 0.964 seconds 00:05:38.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.716 EAL: request: mp_malloc_sync 00:05:38.716 EAL: No shared files mode enabled, IPC is disabled 00:05:38.716 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.716 EAL: No shared files mode enabled, IPC is disabled 00:05:38.716 EAL: No shared files mode enabled, IPC is disabled 00:05:38.716 EAL: No shared files mode enabled, IPC is disabled 00:05:38.716 00:05:38.716 real 0m1.089s 00:05:38.716 user 0m0.631s 00:05:38.716 sys 0m0.427s 00:05:38.716 15:24:16 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.716 15:24:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:38.716 ************************************ 00:05:38.716 END TEST env_vtophys 00:05:38.716 ************************************ 00:05:38.716 15:24:16 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.716 15:24:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.716 15:24:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.716 15:24:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.976 ************************************ 00:05:38.976 START TEST env_pci 00:05:38.976 ************************************ 00:05:38.976 15:24:16 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.976 00:05:38.976 00:05:38.976 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.976 http://cunit.sourceforge.net/ 00:05:38.976 00:05:38.976 00:05:38.976 Suite: pci 00:05:38.976 Test: pci_hook ...[2024-11-03 15:24:16.549435] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2087162 has claimed it 00:05:38.976 EAL: Cannot find device (10000:00:01.0) 00:05:38.976 EAL: Failed to attach device on primary process 00:05:38.976 passed 00:05:38.976 00:05:38.976 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.976 suites 1 
1 n/a 0 0 00:05:38.976 tests 1 1 1 0 0 00:05:38.976 asserts 25 25 25 0 n/a 00:05:38.976 00:05:38.976 Elapsed time = 0.033 seconds 00:05:38.976 00:05:38.976 real 0m0.051s 00:05:38.976 user 0m0.015s 00:05:38.976 sys 0m0.035s 00:05:38.976 15:24:16 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.976 15:24:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:38.976 ************************************ 00:05:38.976 END TEST env_pci 00:05:38.976 ************************************ 00:05:38.976 15:24:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.976 15:24:16 env -- env/env.sh@15 -- # uname 00:05:38.976 15:24:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.976 15:24:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.976 15:24:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.976 15:24:16 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:38.976 15:24:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.976 15:24:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.976 ************************************ 00:05:38.976 START TEST env_dpdk_post_init 00:05:38.976 ************************************ 00:05:38.976 15:24:16 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.976 EAL: Detected CPU lcores: 112 00:05:38.976 EAL: Detected NUMA nodes: 2 00:05:38.976 EAL: Detected shared linkage of DPDK 00:05:38.976 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.976 EAL: Selected IOVA mode 'VA' 00:05:38.976 EAL: VFIO support initialized 00:05:38.976 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.235 EAL: Using IOMMU type 1 (Type 1) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:39.235 EAL: Ignore mapping IO port 
bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.235 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:39.235 EAL: Ignore mapping IO port bar(1) 00:05:39.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:39.236 EAL: Ignore mapping IO port bar(1) 00:05:39.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:40.173 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:44.364 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:44.364 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:44.364 Starting DPDK initialization... 00:05:44.364 Starting SPDK post initialization... 00:05:44.364 SPDK NVMe probe 00:05:44.364 Attaching to 0000:d8:00.0 00:05:44.364 Attached to 0000:d8:00.0 00:05:44.364 Cleaning up... 00:05:44.364 00:05:44.364 real 0m5.345s 00:05:44.364 user 0m3.977s 00:05:44.364 sys 0m0.424s 00:05:44.364 15:24:22 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.364 15:24:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.364 ************************************ 00:05:44.364 END TEST env_dpdk_post_init 00:05:44.364 ************************************ 00:05:44.364 15:24:22 env -- env/env.sh@26 -- # uname 00:05:44.364 15:24:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.364 15:24:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.364 15:24:22 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.364 15:24:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.364 15:24:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.364 ************************************ 00:05:44.364 START TEST env_mem_callbacks 00:05:44.364 ************************************ 00:05:44.364 15:24:22 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.364 EAL: Detected CPU lcores: 112 00:05:44.364 EAL: Detected NUMA nodes: 2 00:05:44.364 EAL: Detected shared linkage of DPDK 00:05:44.364 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.623 EAL: Selected IOVA mode 'VA' 00:05:44.623 EAL: VFIO support initialized 00:05:44.623 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.623 00:05:44.623 00:05:44.623 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.623 http://cunit.sourceforge.net/ 00:05:44.623 00:05:44.623 00:05:44.623 Suite: memory 00:05:44.623 Test: test ... 
00:05:44.623 register 0x200000200000 2097152 00:05:44.623 malloc 3145728 00:05:44.623 register 0x200000400000 4194304 00:05:44.623 buf 0x200000500000 len 3145728 PASSED 00:05:44.623 malloc 64 00:05:44.623 buf 0x2000004fff40 len 64 PASSED 00:05:44.623 malloc 4194304 00:05:44.623 register 0x200000800000 6291456 00:05:44.623 buf 0x200000a00000 len 4194304 PASSED 00:05:44.623 free 0x200000500000 3145728 00:05:44.623 free 0x2000004fff40 64 00:05:44.623 unregister 0x200000400000 4194304 PASSED 00:05:44.623 free 0x200000a00000 4194304 00:05:44.623 unregister 0x200000800000 6291456 PASSED 00:05:44.623 malloc 8388608 00:05:44.623 register 0x200000400000 10485760 00:05:44.623 buf 0x200000600000 len 8388608 PASSED 00:05:44.623 free 0x200000600000 8388608 00:05:44.623 unregister 0x200000400000 10485760 PASSED 00:05:44.623 passed 00:05:44.623 00:05:44.623 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.623 suites 1 1 n/a 0 0 00:05:44.623 tests 1 1 1 0 0 00:05:44.623 asserts 15 15 15 0 n/a 00:05:44.623 00:05:44.623 Elapsed time = 0.005 seconds 00:05:44.623 00:05:44.623 real 0m0.064s 00:05:44.623 user 0m0.025s 00:05:44.623 sys 0m0.039s 00:05:44.623 15:24:22 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.623 15:24:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:44.623 ************************************ 00:05:44.623 END TEST env_mem_callbacks 00:05:44.623 ************************************ 00:05:44.623 00:05:44.623 real 0m7.304s 00:05:44.623 user 0m5.043s 00:05:44.623 sys 0m1.335s 00:05:44.623 15:24:22 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.623 15:24:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.623 ************************************ 00:05:44.623 END TEST env 00:05:44.623 ************************************ 00:05:44.623 15:24:22 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.623 15:24:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.623 15:24:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.623 15:24:22 -- common/autotest_common.sh@10 -- # set +x 00:05:44.623 ************************************ 00:05:44.623 START TEST rpc 00:05:44.623 ************************************ 00:05:44.623 15:24:22 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.623 * Looking for test storage... 
00:05:44.623 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:44.623 15:24:22 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.623 15:24:22 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.623 15:24:22 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.883 15:24:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.883 15:24:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.883 15:24:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.883 15:24:22 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.883 15:24:22 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.883 15:24:22 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.883 15:24:22 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.883 15:24:22 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.883 15:24:22 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.883 15:24:22 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.883 15:24:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.883 15:24:22 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.883 15:24:22 rpc -- scripts/common.sh@345 -- # : 1 00:05:44.883 15:24:22 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.883 15:24:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.883 15:24:22 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.883 15:24:22 rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.883 15:24:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.883 15:24:22 rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.883 15:24:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.883 15:24:22 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.883 15:24:22 rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.883 15:24:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.883 15:24:22 rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.883 15:24:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.883 15:24:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.883 15:24:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.883 15:24:22 rpc -- scripts/common.sh@368 -- # return 0 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.883 --rc genhtml_branch_coverage=1 00:05:44.883 --rc genhtml_function_coverage=1 00:05:44.883 --rc genhtml_legend=1 00:05:44.883 --rc geninfo_all_blocks=1 00:05:44.883 --rc geninfo_unexecuted_blocks=1 00:05:44.883 00:05:44.883 ' 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.883 --rc genhtml_branch_coverage=1 00:05:44.883 --rc genhtml_function_coverage=1 00:05:44.883 --rc genhtml_legend=1 00:05:44.883 --rc geninfo_all_blocks=1 00:05:44.883 --rc geninfo_unexecuted_blocks=1 00:05:44.883 00:05:44.883 ' 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.883 --rc genhtml_branch_coverage=1 00:05:44.883 --rc genhtml_function_coverage=1 00:05:44.883 
--rc genhtml_legend=1 00:05:44.883 --rc geninfo_all_blocks=1 00:05:44.883 --rc geninfo_unexecuted_blocks=1 00:05:44.883 00:05:44.883 ' 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.883 --rc genhtml_branch_coverage=1 00:05:44.883 --rc genhtml_function_coverage=1 00:05:44.883 --rc genhtml_legend=1 00:05:44.883 --rc geninfo_all_blocks=1 00:05:44.883 --rc geninfo_unexecuted_blocks=1 00:05:44.883 00:05:44.883 ' 00:05:44.883 15:24:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2088269 00:05:44.883 15:24:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:44.883 15:24:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.883 15:24:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2088269 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@833 -- # '[' -z 2088269 ']' 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.883 15:24:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.883 [2024-11-03 15:24:22.538172] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:05:44.883 [2024-11-03 15:24:22.538227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088269 ] 00:05:44.883 [2024-11-03 15:24:22.615842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.883 [2024-11-03 15:24:22.637468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.883 [2024-11-03 15:24:22.637522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2088269' to capture a snapshot of events at runtime. 00:05:44.883 [2024-11-03 15:24:22.637532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.883 [2024-11-03 15:24:22.637541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.883 [2024-11-03 15:24:22.637549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2088269 for offline analysis/debug. 
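
The app_setup_trace NOTICEs above spell out both ways to inspect the bdev tracepoints enabled by '-e bdev'. As printed by this run (target PID 2088269; the copy destination is arbitrary):

    # Snapshot the live trace ring of the running target:
    spdk_trace -s spdk_tgt -p 2088269

    # ...or keep the shared-memory trace file for offline analysis:
    cp /dev/shm/spdk_tgt_trace.pid2088269 /tmp/
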
00:05:44.883 [2024-11-03 15:24:22.638169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.142 15:24:22 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.142 15:24:22 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:45.142 15:24:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:45.142 15:24:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:45.142 15:24:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.142 15:24:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.142 15:24:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.142 15:24:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.142 15:24:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.142 ************************************ 00:05:45.142 START TEST rpc_integrity 00:05:45.142 ************************************ 00:05:45.142 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:45.142 15:24:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.142 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.142 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.142 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.142 15:24:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.142 15:24:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.422 15:24:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.422 15:24:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.422 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.422 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.422 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.422 15:24:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.422 15:24:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.422 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.422 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.422 15:24:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.422 15:24:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.422 { 00:05:45.422 "name": "Malloc0", 00:05:45.422 "aliases": [ 00:05:45.422 "e7c3140f-5ea0-4717-8f67-532a54794270" 00:05:45.422 ], 00:05:45.422 "product_name": "Malloc disk", 00:05:45.422 "block_size": 512, 00:05:45.422 "num_blocks": 16384, 00:05:45.422 "uuid": "e7c3140f-5ea0-4717-8f67-532a54794270", 00:05:45.422 "assigned_rate_limits": { 00:05:45.422 "rw_ios_per_sec": 0, 00:05:45.422 "rw_mbytes_per_sec": 0, 00:05:45.422 "r_mbytes_per_sec": 0, 00:05:45.422 "w_mbytes_per_sec": 0 00:05:45.422 }, 00:05:45.422 "claimed": false, 
00:05:45.422 "zoned": false, 00:05:45.422 "supported_io_types": { 00:05:45.422 "read": true, 00:05:45.422 "write": true, 00:05:45.422 "unmap": true, 00:05:45.422 "flush": true, 00:05:45.422 "reset": true, 00:05:45.422 "nvme_admin": false, 00:05:45.422 "nvme_io": false, 00:05:45.422 "nvme_io_md": false, 00:05:45.422 "write_zeroes": true, 00:05:45.422 "zcopy": true, 00:05:45.422 "get_zone_info": false, 00:05:45.422 "zone_management": false, 00:05:45.422 "zone_append": false, 00:05:45.422 "compare": false, 00:05:45.422 "compare_and_write": false, 00:05:45.422 "abort": true, 00:05:45.422 "seek_hole": false, 00:05:45.422 "seek_data": false, 00:05:45.422 "copy": true, 00:05:45.422 "nvme_iov_md": false 00:05:45.422 }, 00:05:45.422 "memory_domains": [ 00:05:45.423 { 00:05:45.423 "dma_device_id": "system", 00:05:45.423 "dma_device_type": 1 00:05:45.423 }, 00:05:45.423 { 00:05:45.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.423 "dma_device_type": 2 00:05:45.423 } 00:05:45.423 ], 00:05:45.423 "driver_specific": {} 00:05:45.423 } 00:05:45.423 ]' 00:05:45.423 15:24:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.423 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.423 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.423 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.423 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.423 [2024-11-03 15:24:23.007795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.423 [2024-11-03 15:24:23.007825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.423 [2024-11-03 15:24:23.007840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x954300 00:05:45.423 [2024-11-03 15:24:23.007848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.423 [2024-11-03 15:24:23.008947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.423 [2024-11-03 15:24:23.008977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.423 Passthru0 00:05:45.423 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.423 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.423 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.423 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.423 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.423 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.423 { 00:05:45.423 "name": "Malloc0", 00:05:45.423 "aliases": [ 00:05:45.423 "e7c3140f-5ea0-4717-8f67-532a54794270" 00:05:45.423 ], 00:05:45.423 "product_name": "Malloc disk", 00:05:45.423 "block_size": 512, 00:05:45.423 "num_blocks": 16384, 00:05:45.423 "uuid": "e7c3140f-5ea0-4717-8f67-532a54794270", 00:05:45.423 "assigned_rate_limits": { 00:05:45.423 "rw_ios_per_sec": 0, 00:05:45.423 "rw_mbytes_per_sec": 0, 00:05:45.423 "r_mbytes_per_sec": 0, 00:05:45.423 "w_mbytes_per_sec": 0 00:05:45.423 }, 00:05:45.423 "claimed": true, 00:05:45.423 "claim_type": "exclusive_write", 00:05:45.423 "zoned": false, 00:05:45.423 "supported_io_types": { 00:05:45.423 "read": true, 00:05:45.423 "write": true, 00:05:45.423 "unmap": true, 00:05:45.423 "flush": true, 00:05:45.423 "reset": true, 
00:05:45.423 "nvme_admin": false, 00:05:45.423 "nvme_io": false, 00:05:45.423 "nvme_io_md": false, 00:05:45.423 "write_zeroes": true, 00:05:45.423 "zcopy": true, 00:05:45.423 "get_zone_info": false, 00:05:45.423 "zone_management": false, 00:05:45.423 "zone_append": false, 00:05:45.423 "compare": false, 00:05:45.423 "compare_and_write": false, 00:05:45.423 "abort": true, 00:05:45.423 "seek_hole": false, 00:05:45.423 "seek_data": false, 00:05:45.423 "copy": true, 00:05:45.423 "nvme_iov_md": false 00:05:45.423 }, 00:05:45.423 "memory_domains": [ 00:05:45.423 { 00:05:45.423 "dma_device_id": "system", 00:05:45.423 "dma_device_type": 1 00:05:45.423 }, 00:05:45.423 { 00:05:45.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.423 "dma_device_type": 2 00:05:45.423 } 00:05:45.423 ], 00:05:45.423 "driver_specific": {} 00:05:45.423 }, 00:05:45.423 { 00:05:45.423 "name": "Passthru0", 00:05:45.423 "aliases": [ 00:05:45.423 "cf7ee81e-fbf2-5285-a285-c8faafd00f91" 00:05:45.423 ], 00:05:45.423 "product_name": "passthru", 00:05:45.423 "block_size": 512, 00:05:45.423 "num_blocks": 16384, 00:05:45.424 "uuid": "cf7ee81e-fbf2-5285-a285-c8faafd00f91", 00:05:45.424 "assigned_rate_limits": { 00:05:45.424 "rw_ios_per_sec": 0, 00:05:45.424 "rw_mbytes_per_sec": 0, 00:05:45.424 "r_mbytes_per_sec": 0, 00:05:45.424 "w_mbytes_per_sec": 0 00:05:45.424 }, 00:05:45.424 "claimed": false, 00:05:45.424 "zoned": false, 00:05:45.424 "supported_io_types": { 00:05:45.424 "read": true, 00:05:45.424 "write": true, 00:05:45.424 "unmap": true, 00:05:45.424 "flush": true, 00:05:45.424 "reset": true, 00:05:45.424 "nvme_admin": false, 00:05:45.424 "nvme_io": false, 00:05:45.424 "nvme_io_md": false, 00:05:45.424 "write_zeroes": true, 00:05:45.424 "zcopy": true, 00:05:45.424 "get_zone_info": false, 00:05:45.424 "zone_management": false, 00:05:45.424 "zone_append": false, 00:05:45.424 "compare": false, 00:05:45.424 "compare_and_write": false, 00:05:45.424 "abort": true, 00:05:45.424 "seek_hole": false, 00:05:45.424 "seek_data": false, 00:05:45.424 "copy": true, 00:05:45.424 "nvme_iov_md": false 00:05:45.424 }, 00:05:45.424 "memory_domains": [ 00:05:45.424 { 00:05:45.424 "dma_device_id": "system", 00:05:45.424 "dma_device_type": 1 00:05:45.424 }, 00:05:45.424 { 00:05:45.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.424 "dma_device_type": 2 00:05:45.424 } 00:05:45.424 ], 00:05:45.424 "driver_specific": { 00:05:45.424 "passthru": { 00:05:45.424 "name": "Passthru0", 00:05:45.424 "base_bdev_name": "Malloc0" 00:05:45.424 } 00:05:45.424 } 00:05:45.424 } 00:05:45.424 ]' 00:05:45.424 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.424 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.424 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.424 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.424 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.424 
15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.424 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.424 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.424 15:24:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.424 00:05:45.424 real 0m0.269s 00:05:45.424 user 0m0.175s 00:05:45.424 sys 0m0.043s 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.424 15:24:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.424 ************************************ 00:05:45.424 END TEST rpc_integrity 00:05:45.424 ************************************ 00:05:45.424 15:24:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.424 15:24:23 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.424 15:24:23 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.424 15:24:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.687 ************************************ 00:05:45.687 START TEST rpc_plugins 00:05:45.687 ************************************ 00:05:45.687 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:45.687 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.687 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.687 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.687 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.687 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.687 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.687 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.687 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.687 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.687 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.687 { 00:05:45.687 "name": "Malloc1", 00:05:45.687 "aliases": [ 00:05:45.687 "183d5db2-e27d-4242-a242-904e85b5ab1f" 00:05:45.687 ], 00:05:45.687 "product_name": "Malloc disk", 00:05:45.687 "block_size": 4096, 00:05:45.687 "num_blocks": 256, 00:05:45.687 "uuid": "183d5db2-e27d-4242-a242-904e85b5ab1f", 00:05:45.687 "assigned_rate_limits": { 00:05:45.687 "rw_ios_per_sec": 0, 00:05:45.687 "rw_mbytes_per_sec": 0, 00:05:45.687 "r_mbytes_per_sec": 0, 00:05:45.687 "w_mbytes_per_sec": 0 00:05:45.687 }, 00:05:45.687 "claimed": false, 00:05:45.687 "zoned": false, 00:05:45.687 "supported_io_types": { 00:05:45.687 "read": true, 00:05:45.687 "write": true, 00:05:45.687 "unmap": true, 00:05:45.687 "flush": true, 00:05:45.687 "reset": true, 00:05:45.687 "nvme_admin": false, 00:05:45.687 "nvme_io": false, 00:05:45.687 "nvme_io_md": false, 00:05:45.687 "write_zeroes": true, 00:05:45.687 "zcopy": true, 00:05:45.687 "get_zone_info": false, 00:05:45.687 "zone_management": false, 00:05:45.687 "zone_append": false, 00:05:45.687 "compare": false, 00:05:45.687 "compare_and_write": false, 00:05:45.687 "abort": true, 00:05:45.688 "seek_hole": false, 00:05:45.688 "seek_data": false, 00:05:45.688 "copy": true, 00:05:45.688 "nvme_iov_md": false 00:05:45.688 }, 00:05:45.688 
"memory_domains": [ 00:05:45.688 { 00:05:45.688 "dma_device_id": "system", 00:05:45.688 "dma_device_type": 1 00:05:45.688 }, 00:05:45.688 { 00:05:45.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.688 "dma_device_type": 2 00:05:45.688 } 00:05:45.688 ], 00:05:45.688 "driver_specific": {} 00:05:45.688 } 00:05:45.688 ]' 00:05:45.688 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:45.688 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.688 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.688 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.688 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.688 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.688 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.688 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.688 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.688 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.688 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.688 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:45.688 15:24:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.688 00:05:45.688 real 0m0.133s 00:05:45.688 user 0m0.076s 00:05:45.688 sys 0m0.021s 00:05:45.688 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.688 15:24:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.688 ************************************ 00:05:45.688 END TEST rpc_plugins 00:05:45.688 ************************************ 00:05:45.688 15:24:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.688 15:24:23 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.688 15:24:23 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.688 15:24:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.688 ************************************ 00:05:45.688 START TEST rpc_trace_cmd_test 00:05:45.688 ************************************ 00:05:45.688 15:24:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:45.688 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:45.688 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.688 15:24:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.688 15:24:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.688 15:24:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.688 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:45.688 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2088269", 00:05:45.688 "tpoint_group_mask": "0x8", 00:05:45.688 "iscsi_conn": { 00:05:45.688 "mask": "0x2", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "scsi": { 00:05:45.688 "mask": "0x4", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "bdev": { 00:05:45.688 "mask": "0x8", 00:05:45.688 "tpoint_mask": "0xffffffffffffffff" 00:05:45.688 }, 00:05:45.688 "nvmf_rdma": { 00:05:45.688 "mask": "0x10", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "nvmf_tcp": { 00:05:45.688 "mask": "0x20", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 
00:05:45.688 "ftl": { 00:05:45.688 "mask": "0x40", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "blobfs": { 00:05:45.688 "mask": "0x80", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "dsa": { 00:05:45.688 "mask": "0x200", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "thread": { 00:05:45.688 "mask": "0x400", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "nvme_pcie": { 00:05:45.688 "mask": "0x800", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "iaa": { 00:05:45.688 "mask": "0x1000", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "nvme_tcp": { 00:05:45.688 "mask": "0x2000", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "bdev_nvme": { 00:05:45.688 "mask": "0x4000", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "sock": { 00:05:45.688 "mask": "0x8000", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "blob": { 00:05:45.688 "mask": "0x10000", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "bdev_raid": { 00:05:45.688 "mask": "0x20000", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 }, 00:05:45.688 "scheduler": { 00:05:45.688 "mask": "0x40000", 00:05:45.688 "tpoint_mask": "0x0" 00:05:45.688 } 00:05:45.688 }' 00:05:45.688 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.947 00:05:45.947 real 0m0.216s 00:05:45.947 user 0m0.182s 00:05:45.947 sys 0m0.028s 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.947 15:24:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.947 ************************************ 00:05:45.947 END TEST rpc_trace_cmd_test 00:05:45.947 ************************************ 00:05:45.947 15:24:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:45.947 15:24:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.947 15:24:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.947 15:24:23 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.947 15:24:23 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.947 15:24:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.206 ************************************ 00:05:46.206 START TEST rpc_daemon_integrity 00:05:46.206 ************************************ 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.207 { 00:05:46.207 "name": "Malloc2", 00:05:46.207 "aliases": [ 00:05:46.207 "e7c30a1f-4bc7-40b4-9758-0a0f94cb1c67" 00:05:46.207 ], 00:05:46.207 "product_name": "Malloc disk", 00:05:46.207 "block_size": 512, 00:05:46.207 "num_blocks": 16384, 00:05:46.207 "uuid": "e7c30a1f-4bc7-40b4-9758-0a0f94cb1c67", 00:05:46.207 "assigned_rate_limits": { 00:05:46.207 "rw_ios_per_sec": 0, 00:05:46.207 "rw_mbytes_per_sec": 0, 00:05:46.207 "r_mbytes_per_sec": 0, 00:05:46.207 "w_mbytes_per_sec": 0 00:05:46.207 }, 00:05:46.207 "claimed": false, 00:05:46.207 "zoned": false, 00:05:46.207 "supported_io_types": { 00:05:46.207 "read": true, 00:05:46.207 "write": true, 00:05:46.207 "unmap": true, 00:05:46.207 "flush": true, 00:05:46.207 "reset": true, 00:05:46.207 "nvme_admin": false, 00:05:46.207 "nvme_io": false, 00:05:46.207 "nvme_io_md": false, 00:05:46.207 "write_zeroes": true, 00:05:46.207 "zcopy": true, 00:05:46.207 "get_zone_info": false, 00:05:46.207 "zone_management": false, 00:05:46.207 "zone_append": false, 00:05:46.207 "compare": false, 00:05:46.207 "compare_and_write": false, 00:05:46.207 "abort": true, 00:05:46.207 "seek_hole": false, 00:05:46.207 "seek_data": false, 00:05:46.207 "copy": true, 00:05:46.207 "nvme_iov_md": false 00:05:46.207 }, 00:05:46.207 "memory_domains": [ 00:05:46.207 { 00:05:46.207 "dma_device_id": "system", 00:05:46.207 "dma_device_type": 1 00:05:46.207 }, 00:05:46.207 { 00:05:46.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.207 "dma_device_type": 2 00:05:46.207 } 00:05:46.207 ], 00:05:46.207 "driver_specific": {} 00:05:46.207 } 00:05:46.207 ]' 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.207 [2024-11-03 15:24:23.878149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.207 [2024-11-03 15:24:23.878180] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.207 [2024-11-03 15:24:23.878194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x942050 00:05:46.207 [2024-11-03 15:24:23.878202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.207 [2024-11-03 15:24:23.879130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.207 [2024-11-03 15:24:23.879155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.207 Passthru0 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.207 { 00:05:46.207 "name": "Malloc2", 00:05:46.207 "aliases": [ 00:05:46.207 "e7c30a1f-4bc7-40b4-9758-0a0f94cb1c67" 00:05:46.207 ], 00:05:46.207 "product_name": "Malloc disk", 00:05:46.207 "block_size": 512, 00:05:46.207 "num_blocks": 16384, 00:05:46.207 "uuid": "e7c30a1f-4bc7-40b4-9758-0a0f94cb1c67", 00:05:46.207 "assigned_rate_limits": { 00:05:46.207 "rw_ios_per_sec": 0, 00:05:46.207 "rw_mbytes_per_sec": 0, 00:05:46.207 "r_mbytes_per_sec": 0, 00:05:46.207 "w_mbytes_per_sec": 0 00:05:46.207 }, 00:05:46.207 "claimed": true, 00:05:46.207 "claim_type": "exclusive_write", 00:05:46.207 "zoned": false, 00:05:46.207 "supported_io_types": { 00:05:46.207 "read": true, 00:05:46.207 "write": true, 00:05:46.207 "unmap": true, 00:05:46.207 "flush": true, 00:05:46.207 "reset": true, 00:05:46.207 "nvme_admin": false, 00:05:46.207 "nvme_io": false, 00:05:46.207 "nvme_io_md": false, 00:05:46.207 "write_zeroes": true, 00:05:46.207 "zcopy": true, 00:05:46.207 "get_zone_info": false, 00:05:46.207 "zone_management": false, 00:05:46.207 "zone_append": false, 00:05:46.207 "compare": false, 00:05:46.207 "compare_and_write": false, 00:05:46.207 "abort": true, 00:05:46.207 "seek_hole": false, 00:05:46.207 "seek_data": false, 00:05:46.207 "copy": true, 00:05:46.207 "nvme_iov_md": false 00:05:46.207 }, 00:05:46.207 "memory_domains": [ 00:05:46.207 { 00:05:46.207 "dma_device_id": "system", 00:05:46.207 "dma_device_type": 1 00:05:46.207 }, 00:05:46.207 { 00:05:46.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.207 "dma_device_type": 2 00:05:46.207 } 00:05:46.207 ], 00:05:46.207 "driver_specific": {} 00:05:46.207 }, 00:05:46.207 { 00:05:46.207 "name": "Passthru0", 00:05:46.207 "aliases": [ 00:05:46.207 "da84b1eb-21b8-5ac2-adce-ec72ce7825d0" 00:05:46.207 ], 00:05:46.207 "product_name": "passthru", 00:05:46.207 "block_size": 512, 00:05:46.207 "num_blocks": 16384, 00:05:46.207 "uuid": "da84b1eb-21b8-5ac2-adce-ec72ce7825d0", 00:05:46.207 "assigned_rate_limits": { 00:05:46.207 "rw_ios_per_sec": 0, 00:05:46.207 "rw_mbytes_per_sec": 0, 00:05:46.207 "r_mbytes_per_sec": 0, 00:05:46.207 "w_mbytes_per_sec": 0 00:05:46.207 }, 00:05:46.207 "claimed": false, 00:05:46.207 "zoned": false, 00:05:46.207 "supported_io_types": { 00:05:46.207 "read": true, 00:05:46.207 "write": true, 00:05:46.207 "unmap": true, 00:05:46.207 "flush": true, 00:05:46.207 "reset": true, 00:05:46.207 "nvme_admin": false, 
00:05:46.207 "nvme_io": false, 00:05:46.207 "nvme_io_md": false, 00:05:46.207 "write_zeroes": true, 00:05:46.207 "zcopy": true, 00:05:46.207 "get_zone_info": false, 00:05:46.207 "zone_management": false, 00:05:46.207 "zone_append": false, 00:05:46.207 "compare": false, 00:05:46.207 "compare_and_write": false, 00:05:46.207 "abort": true, 00:05:46.207 "seek_hole": false, 00:05:46.207 "seek_data": false, 00:05:46.207 "copy": true, 00:05:46.207 "nvme_iov_md": false 00:05:46.207 }, 00:05:46.207 "memory_domains": [ 00:05:46.207 { 00:05:46.207 "dma_device_id": "system", 00:05:46.207 "dma_device_type": 1 00:05:46.207 }, 00:05:46.207 { 00:05:46.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.207 "dma_device_type": 2 00:05:46.207 } 00:05:46.207 ], 00:05:46.207 "driver_specific": { 00:05:46.207 "passthru": { 00:05:46.207 "name": "Passthru0", 00:05:46.207 "base_bdev_name": "Malloc2" 00:05:46.207 } 00:05:46.207 } 00:05:46.207 } 00:05:46.207 ]' 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.207 15:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.467 15:24:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.467 00:05:46.467 real 0m0.278s 00:05:46.467 user 0m0.180s 00:05:46.467 sys 0m0.045s 00:05:46.467 15:24:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.467 15:24:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.467 ************************************ 00:05:46.467 END TEST rpc_daemon_integrity 00:05:46.467 ************************************ 00:05:46.467 15:24:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.467 15:24:24 rpc -- rpc/rpc.sh@84 -- # killprocess 2088269 00:05:46.467 15:24:24 rpc -- common/autotest_common.sh@952 -- # '[' -z 2088269 ']' 00:05:46.467 15:24:24 rpc -- common/autotest_common.sh@956 -- # kill -0 2088269 00:05:46.467 15:24:24 rpc -- common/autotest_common.sh@957 -- # uname 00:05:46.467 15:24:24 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:46.467 15:24:24 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2088269 00:05:46.467 15:24:24 rpc -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:46.467 15:24:24 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:46.467 15:24:24 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2088269' 00:05:46.467 killing process with pid 2088269 00:05:46.467 15:24:24 rpc -- common/autotest_common.sh@971 -- # kill 2088269 00:05:46.467 15:24:24 rpc -- common/autotest_common.sh@976 -- # wait 2088269 00:05:46.726 00:05:46.726 real 0m2.125s 00:05:46.726 user 0m2.653s 00:05:46.726 sys 0m0.814s 00:05:46.726 15:24:24 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.726 15:24:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 ************************************ 00:05:46.726 END TEST rpc 00:05:46.726 ************************************ 00:05:46.726 15:24:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:46.726 15:24:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.726 15:24:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.726 15:24:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 ************************************ 00:05:46.726 START TEST skip_rpc 00:05:46.726 ************************************ 00:05:46.726 15:24:24 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:46.986 * Looking for test storage... 00:05:46.986 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.986 15:24:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.986 --rc genhtml_branch_coverage=1 00:05:46.986 --rc genhtml_function_coverage=1 00:05:46.986 --rc genhtml_legend=1 00:05:46.986 --rc geninfo_all_blocks=1 00:05:46.986 --rc geninfo_unexecuted_blocks=1 00:05:46.986 00:05:46.986 ' 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.986 --rc genhtml_branch_coverage=1 00:05:46.986 --rc genhtml_function_coverage=1 00:05:46.986 --rc genhtml_legend=1 00:05:46.986 --rc geninfo_all_blocks=1 00:05:46.986 --rc geninfo_unexecuted_blocks=1 00:05:46.986 00:05:46.986 ' 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.986 --rc genhtml_branch_coverage=1 00:05:46.986 --rc genhtml_function_coverage=1 00:05:46.986 --rc genhtml_legend=1 00:05:46.986 --rc geninfo_all_blocks=1 00:05:46.986 --rc geninfo_unexecuted_blocks=1 00:05:46.986 00:05:46.986 ' 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.986 --rc genhtml_branch_coverage=1 00:05:46.986 --rc genhtml_function_coverage=1 00:05:46.986 --rc genhtml_legend=1 00:05:46.986 --rc geninfo_all_blocks=1 00:05:46.986 --rc geninfo_unexecuted_blocks=1 00:05:46.986 00:05:46.986 ' 00:05:46.986 15:24:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:46.986 15:24:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:46.986 15:24:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.986 15:24:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.986 ************************************ 00:05:46.986 START TEST skip_rpc 00:05:46.986 ************************************ 00:05:46.986 15:24:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:46.986 15:24:24 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2088910 00:05:46.986 15:24:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.986 15:24:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:46.986 15:24:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.246 [2024-11-03 15:24:24.780904] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:05:47.246 [2024-11-03 15:24:24.780952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088910 ] 00:05:47.246 [2024-11-03 15:24:24.858208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.246 [2024-11-03 15:24:24.880238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2088910 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2088910 ']' 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2088910 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2088910 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2088910' 00:05:52.576 killing process with pid 2088910 00:05:52.576 15:24:29 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2088910 00:05:52.576 15:24:29 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2088910 00:05:52.576 00:05:52.576 real 0m5.371s 00:05:52.576 user 0m5.128s 00:05:52.576 sys 0m0.294s 00:05:52.576 15:24:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.576 15:24:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.576 ************************************ 00:05:52.576 END TEST skip_rpc 00:05:52.576 ************************************ 00:05:52.576 15:24:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:52.576 15:24:30 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:52.576 15:24:30 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.576 15:24:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.576 ************************************ 00:05:52.576 START TEST skip_rpc_with_json 00:05:52.576 ************************************ 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2089837 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2089837 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2089837 ']' 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.576 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.576 [2024-11-03 15:24:30.218928] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:05:52.576 [2024-11-03 15:24:30.218978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089837 ] 00:05:52.576 [2024-11-03 15:24:30.297231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.576 [2024-11-03 15:24:30.320221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.835 [2024-11-03 15:24:30.525138] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:52.835 request: 00:05:52.835 { 00:05:52.835 "trtype": "tcp", 00:05:52.835 "method": "nvmf_get_transports", 00:05:52.835 "req_id": 1 00:05:52.835 } 00:05:52.835 Got JSON-RPC error response 00:05:52.835 response: 00:05:52.835 { 00:05:52.835 "code": -19, 00:05:52.835 "message": "No such device" 00:05:52.835 } 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.835 [2024-11-03 15:24:30.537248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.835 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.095 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.095 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:53.095 { 00:05:53.095 "subsystems": [ 00:05:53.095 { 00:05:53.095 "subsystem": "fsdev", 00:05:53.095 "config": [ 00:05:53.095 { 00:05:53.095 "method": "fsdev_set_opts", 00:05:53.095 "params": { 00:05:53.095 "fsdev_io_pool_size": 65535, 00:05:53.095 "fsdev_io_cache_size": 256 00:05:53.095 } 00:05:53.095 } 00:05:53.095 ] 00:05:53.095 }, 00:05:53.095 { 00:05:53.095 "subsystem": "keyring", 00:05:53.095 "config": [] 00:05:53.095 }, 00:05:53.095 { 00:05:53.095 "subsystem": "iobuf", 00:05:53.095 "config": [ 00:05:53.095 { 00:05:53.095 "method": "iobuf_set_options", 00:05:53.095 "params": { 00:05:53.095 "small_pool_count": 8192, 00:05:53.095 "large_pool_count": 1024, 00:05:53.095 "small_bufsize": 8192, 00:05:53.095 "large_bufsize": 135168, 00:05:53.095 "enable_numa": false 00:05:53.095 } 00:05:53.095 } 00:05:53.095 ] 00:05:53.095 }, 00:05:53.095 { 00:05:53.095 "subsystem": "sock", 00:05:53.095 "config": [ 00:05:53.095 { 
00:05:53.095 "method": "sock_set_default_impl", 00:05:53.095 "params": { 00:05:53.095 "impl_name": "posix" 00:05:53.095 } 00:05:53.095 }, 00:05:53.095 { 00:05:53.095 "method": "sock_impl_set_options", 00:05:53.095 "params": { 00:05:53.095 "impl_name": "ssl", 00:05:53.095 "recv_buf_size": 4096, 00:05:53.095 "send_buf_size": 4096, 00:05:53.095 "enable_recv_pipe": true, 00:05:53.095 "enable_quickack": false, 00:05:53.095 "enable_placement_id": 0, 00:05:53.095 "enable_zerocopy_send_server": true, 00:05:53.095 "enable_zerocopy_send_client": false, 00:05:53.095 "zerocopy_threshold": 0, 00:05:53.095 "tls_version": 0, 00:05:53.095 "enable_ktls": false 00:05:53.095 } 00:05:53.095 }, 00:05:53.095 { 00:05:53.095 "method": "sock_impl_set_options", 00:05:53.095 "params": { 00:05:53.095 "impl_name": "posix", 00:05:53.095 "recv_buf_size": 2097152, 00:05:53.095 "send_buf_size": 2097152, 00:05:53.095 "enable_recv_pipe": true, 00:05:53.095 "enable_quickack": false, 00:05:53.095 "enable_placement_id": 0, 00:05:53.095 "enable_zerocopy_send_server": true, 00:05:53.095 "enable_zerocopy_send_client": false, 00:05:53.095 "zerocopy_threshold": 0, 00:05:53.095 "tls_version": 0, 00:05:53.095 "enable_ktls": false 00:05:53.095 } 00:05:53.095 } 00:05:53.095 ] 00:05:53.095 }, 00:05:53.095 { 00:05:53.095 "subsystem": "vmd", 00:05:53.096 "config": [] 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "accel", 00:05:53.096 "config": [ 00:05:53.096 { 00:05:53.096 "method": "accel_set_options", 00:05:53.096 "params": { 00:05:53.096 "small_cache_size": 128, 00:05:53.096 "large_cache_size": 16, 00:05:53.096 "task_count": 2048, 00:05:53.096 "sequence_count": 2048, 00:05:53.096 "buf_count": 2048 00:05:53.096 } 00:05:53.096 } 00:05:53.096 ] 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "bdev", 00:05:53.096 "config": [ 00:05:53.096 { 00:05:53.096 "method": "bdev_set_options", 00:05:53.096 "params": { 00:05:53.096 "bdev_io_pool_size": 65535, 00:05:53.096 "bdev_io_cache_size": 256, 00:05:53.096 "bdev_auto_examine": true, 00:05:53.096 "iobuf_small_cache_size": 128, 00:05:53.096 "iobuf_large_cache_size": 16 00:05:53.096 } 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "method": "bdev_raid_set_options", 00:05:53.096 "params": { 00:05:53.096 "process_window_size_kb": 1024, 00:05:53.096 "process_max_bandwidth_mb_sec": 0 00:05:53.096 } 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "method": "bdev_iscsi_set_options", 00:05:53.096 "params": { 00:05:53.096 "timeout_sec": 30 00:05:53.096 } 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "method": "bdev_nvme_set_options", 00:05:53.096 "params": { 00:05:53.096 "action_on_timeout": "none", 00:05:53.096 "timeout_us": 0, 00:05:53.096 "timeout_admin_us": 0, 00:05:53.096 "keep_alive_timeout_ms": 10000, 00:05:53.096 "arbitration_burst": 0, 00:05:53.096 "low_priority_weight": 0, 00:05:53.096 "medium_priority_weight": 0, 00:05:53.096 "high_priority_weight": 0, 00:05:53.096 "nvme_adminq_poll_period_us": 10000, 00:05:53.096 "nvme_ioq_poll_period_us": 0, 00:05:53.096 "io_queue_requests": 0, 00:05:53.096 "delay_cmd_submit": true, 00:05:53.096 "transport_retry_count": 4, 00:05:53.096 "bdev_retry_count": 3, 00:05:53.096 "transport_ack_timeout": 0, 00:05:53.096 "ctrlr_loss_timeout_sec": 0, 00:05:53.096 "reconnect_delay_sec": 0, 00:05:53.096 "fast_io_fail_timeout_sec": 0, 00:05:53.096 "disable_auto_failback": false, 00:05:53.096 "generate_uuids": false, 00:05:53.096 "transport_tos": 0, 00:05:53.096 "nvme_error_stat": false, 00:05:53.096 "rdma_srq_size": 0, 00:05:53.096 "io_path_stat": false, 
00:05:53.096 "allow_accel_sequence": false, 00:05:53.096 "rdma_max_cq_size": 0, 00:05:53.096 "rdma_cm_event_timeout_ms": 0, 00:05:53.096 "dhchap_digests": [ 00:05:53.096 "sha256", 00:05:53.096 "sha384", 00:05:53.096 "sha512" 00:05:53.096 ], 00:05:53.096 "dhchap_dhgroups": [ 00:05:53.096 "null", 00:05:53.096 "ffdhe2048", 00:05:53.096 "ffdhe3072", 00:05:53.096 "ffdhe4096", 00:05:53.096 "ffdhe6144", 00:05:53.096 "ffdhe8192" 00:05:53.096 ] 00:05:53.096 } 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "method": "bdev_nvme_set_hotplug", 00:05:53.096 "params": { 00:05:53.096 "period_us": 100000, 00:05:53.096 "enable": false 00:05:53.096 } 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "method": "bdev_wait_for_examine" 00:05:53.096 } 00:05:53.096 ] 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "scsi", 00:05:53.096 "config": null 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "scheduler", 00:05:53.096 "config": [ 00:05:53.096 { 00:05:53.096 "method": "framework_set_scheduler", 00:05:53.096 "params": { 00:05:53.096 "name": "static" 00:05:53.096 } 00:05:53.096 } 00:05:53.096 ] 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "vhost_scsi", 00:05:53.096 "config": [] 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "vhost_blk", 00:05:53.096 "config": [] 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "ublk", 00:05:53.096 "config": [] 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "nbd", 00:05:53.096 "config": [] 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "nvmf", 00:05:53.096 "config": [ 00:05:53.096 { 00:05:53.096 "method": "nvmf_set_config", 00:05:53.096 "params": { 00:05:53.096 "discovery_filter": "match_any", 00:05:53.096 "admin_cmd_passthru": { 00:05:53.096 "identify_ctrlr": false 00:05:53.096 }, 00:05:53.096 "dhchap_digests": [ 00:05:53.096 "sha256", 00:05:53.096 "sha384", 00:05:53.096 "sha512" 00:05:53.096 ], 00:05:53.096 "dhchap_dhgroups": [ 00:05:53.096 "null", 00:05:53.096 "ffdhe2048", 00:05:53.096 "ffdhe3072", 00:05:53.096 "ffdhe4096", 00:05:53.096 "ffdhe6144", 00:05:53.096 "ffdhe8192" 00:05:53.096 ] 00:05:53.096 } 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "method": "nvmf_set_max_subsystems", 00:05:53.096 "params": { 00:05:53.096 "max_subsystems": 1024 00:05:53.096 } 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "method": "nvmf_set_crdt", 00:05:53.096 "params": { 00:05:53.096 "crdt1": 0, 00:05:53.096 "crdt2": 0, 00:05:53.096 "crdt3": 0 00:05:53.096 } 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "method": "nvmf_create_transport", 00:05:53.096 "params": { 00:05:53.096 "trtype": "TCP", 00:05:53.096 "max_queue_depth": 128, 00:05:53.096 "max_io_qpairs_per_ctrlr": 127, 00:05:53.096 "in_capsule_data_size": 4096, 00:05:53.096 "max_io_size": 131072, 00:05:53.096 "io_unit_size": 131072, 00:05:53.096 "max_aq_depth": 128, 00:05:53.096 "num_shared_buffers": 511, 00:05:53.096 "buf_cache_size": 4294967295, 00:05:53.096 "dif_insert_or_strip": false, 00:05:53.096 "zcopy": false, 00:05:53.096 "c2h_success": true, 00:05:53.096 "sock_priority": 0, 00:05:53.096 "abort_timeout_sec": 1, 00:05:53.096 "ack_timeout": 0, 00:05:53.096 "data_wr_pool_size": 0 00:05:53.096 } 00:05:53.096 } 00:05:53.096 ] 00:05:53.096 }, 00:05:53.096 { 00:05:53.096 "subsystem": "iscsi", 00:05:53.096 "config": [ 00:05:53.096 { 00:05:53.096 "method": "iscsi_set_options", 00:05:53.096 "params": { 00:05:53.096 "node_base": "iqn.2016-06.io.spdk", 00:05:53.096 "max_sessions": 128, 00:05:53.096 "max_connections_per_session": 2, 00:05:53.096 "max_queue_depth": 64, 00:05:53.096 
"default_time2wait": 2, 00:05:53.096 "default_time2retain": 20, 00:05:53.096 "first_burst_length": 8192, 00:05:53.096 "immediate_data": true, 00:05:53.096 "allow_duplicated_isid": false, 00:05:53.096 "error_recovery_level": 0, 00:05:53.096 "nop_timeout": 60, 00:05:53.096 "nop_in_interval": 30, 00:05:53.096 "disable_chap": false, 00:05:53.096 "require_chap": false, 00:05:53.096 "mutual_chap": false, 00:05:53.096 "chap_group": 0, 00:05:53.096 "max_large_datain_per_connection": 64, 00:05:53.096 "max_r2t_per_connection": 4, 00:05:53.096 "pdu_pool_size": 36864, 00:05:53.096 "immediate_data_pool_size": 16384, 00:05:53.096 "data_out_pool_size": 2048 00:05:53.096 } 00:05:53.096 } 00:05:53.096 ] 00:05:53.096 } 00:05:53.096 ] 00:05:53.096 } 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2089837 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2089837 ']' 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2089837 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2089837 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2089837' 00:05:53.096 killing process with pid 2089837 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2089837 00:05:53.096 15:24:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2089837 00:05:53.356 15:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2090024 00:05:53.356 15:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:53.356 15:24:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2090024 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2090024 ']' 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2090024 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2090024 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2090024' 00:05:58.631 killing process with pid 2090024 00:05:58.631 15:24:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2090024 00:05:58.631 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2090024 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:58.890 00:05:58.890 real 0m6.267s 00:05:58.890 user 0m5.948s 00:05:58.890 sys 0m0.657s 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.890 ************************************ 00:05:58.890 END TEST skip_rpc_with_json 00:05:58.890 ************************************ 00:05:58.890 15:24:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:58.890 15:24:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.890 15:24:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.890 15:24:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.890 ************************************ 00:05:58.890 START TEST skip_rpc_with_delay 00:05:58.890 ************************************ 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:58.890 [2024-11-03 15:24:36.577389] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.890 00:05:58.890 real 0m0.075s 00:05:58.890 user 0m0.040s 00:05:58.890 sys 0m0.035s 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.890 15:24:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:58.890 ************************************ 00:05:58.890 END TEST skip_rpc_with_delay 00:05:58.890 ************************************ 00:05:58.890 15:24:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:58.890 15:24:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:58.890 15:24:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:58.890 15:24:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.890 15:24:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.890 15:24:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.890 ************************************ 00:05:58.890 START TEST exit_on_failed_rpc_init 00:05:58.890 ************************************ 00:05:58.890 15:24:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:58.890 15:24:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2091129 00:05:58.890 15:24:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2091129 00:05:58.890 15:24:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.890 15:24:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2091129 ']' 00:05:58.891 15:24:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.891 15:24:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:58.891 15:24:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.891 15:24:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:58.891 15:24:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.150 [2024-11-03 15:24:36.715094] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:05:59.150 [2024-11-03 15:24:36.715141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091129 ] 00:05:59.150 [2024-11-03 15:24:36.792354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.150 [2024-11-03 15:24:36.814827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:59.410 [2024-11-03 15:24:37.062295] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:05:59.410 [2024-11-03 15:24:37.062347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091138 ] 00:05:59.410 [2024-11-03 15:24:37.137598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.410 [2024-11-03 15:24:37.159788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.410 [2024-11-03 15:24:37.159842] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:59.410 [2024-11-03 15:24:37.159854] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:59.410 [2024-11-03 15:24:37.159862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.410 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:59.669 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:59.669 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:59.669 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2091129 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2091129 ']' 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2091129 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2091129 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2091129' 00:05:59.670 killing process with pid 2091129 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2091129 00:05:59.670 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2091129 00:05:59.929 00:05:59.929 real 0m0.887s 00:05:59.929 user 0m0.923s 00:05:59.929 sys 0m0.400s 00:05:59.929 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.929 15:24:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.929 ************************************ 00:05:59.929 END TEST exit_on_failed_rpc_init 00:05:59.929 ************************************ 00:05:59.929 15:24:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:59.929 00:05:59.929 real 0m13.092s 00:05:59.929 user 0m12.234s 00:05:59.929 sys 0m1.722s 00:05:59.929 15:24:37 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.929 15:24:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.929 ************************************ 00:05:59.929 END TEST skip_rpc 00:05:59.929 ************************************ 00:05:59.929 15:24:37 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:59.929 15:24:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.929 15:24:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.929 15:24:37 -- 
common/autotest_common.sh@10 -- # set +x 00:05:59.929 ************************************ 00:05:59.929 START TEST rpc_client 00:05:59.929 ************************************ 00:05:59.929 15:24:37 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.189 * Looking for test storage... 00:06:00.189 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:00.189 15:24:37 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.189 15:24:37 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.189 15:24:37 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.189 15:24:37 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:00.189 15:24:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:00.190 15:24:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.190 15:24:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:00.190 15:24:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.190 15:24:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.190 15:24:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.190 15:24:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:00.190 15:24:37 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.190 15:24:37 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.190 --rc genhtml_branch_coverage=1 00:06:00.190 --rc genhtml_function_coverage=1 00:06:00.190 --rc genhtml_legend=1 00:06:00.190 --rc geninfo_all_blocks=1 00:06:00.190 --rc geninfo_unexecuted_blocks=1 00:06:00.190 00:06:00.190 ' 00:06:00.190 15:24:37 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.190 --rc genhtml_branch_coverage=1 00:06:00.190 --rc genhtml_function_coverage=1 00:06:00.190 --rc genhtml_legend=1 00:06:00.190 --rc geninfo_all_blocks=1 00:06:00.190 --rc geninfo_unexecuted_blocks=1 00:06:00.190 00:06:00.190 ' 00:06:00.190 15:24:37 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.190 --rc genhtml_branch_coverage=1 00:06:00.190 --rc genhtml_function_coverage=1 00:06:00.190 --rc genhtml_legend=1 00:06:00.190 --rc geninfo_all_blocks=1 00:06:00.190 --rc geninfo_unexecuted_blocks=1 00:06:00.190 00:06:00.190 ' 00:06:00.190 15:24:37 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.190 --rc genhtml_branch_coverage=1 00:06:00.190 --rc genhtml_function_coverage=1 00:06:00.190 --rc genhtml_legend=1 00:06:00.190 --rc geninfo_all_blocks=1 00:06:00.190 --rc geninfo_unexecuted_blocks=1 00:06:00.190 00:06:00.190 ' 00:06:00.190 15:24:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:00.190 OK 00:06:00.190 15:24:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:00.190 00:06:00.190 real 0m0.197s 00:06:00.190 user 0m0.104s 00:06:00.190 sys 0m0.106s 00:06:00.190 15:24:37 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.190 15:24:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:00.190 ************************************ 00:06:00.190 END TEST rpc_client 00:06:00.190 ************************************ 00:06:00.190 15:24:37 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.190 
15:24:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.190 15:24:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.190 15:24:37 -- common/autotest_common.sh@10 -- # set +x 00:06:00.190 ************************************ 00:06:00.190 START TEST json_config 00:06:00.190 ************************************ 00:06:00.190 15:24:37 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.450 15:24:38 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.450 15:24:38 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.450 15:24:38 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.450 15:24:38 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.450 15:24:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.450 15:24:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.450 15:24:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.450 15:24:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.450 15:24:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.450 15:24:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.450 15:24:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.450 15:24:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.450 15:24:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.450 15:24:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.450 15:24:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.450 15:24:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:00.450 15:24:38 json_config -- scripts/common.sh@345 -- # : 1 00:06:00.450 15:24:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.450 15:24:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.450 15:24:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:00.450 15:24:38 json_config -- scripts/common.sh@353 -- # local d=1 00:06:00.450 15:24:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.450 15:24:38 json_config -- scripts/common.sh@355 -- # echo 1 00:06:00.450 15:24:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.450 15:24:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:00.450 15:24:38 json_config -- scripts/common.sh@353 -- # local d=2 00:06:00.450 15:24:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.450 15:24:38 json_config -- scripts/common.sh@355 -- # echo 2 00:06:00.450 15:24:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.450 15:24:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.450 15:24:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.450 15:24:38 json_config -- scripts/common.sh@368 -- # return 0 00:06:00.450 15:24:38 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.450 15:24:38 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.450 --rc genhtml_branch_coverage=1 00:06:00.450 --rc genhtml_function_coverage=1 00:06:00.450 --rc genhtml_legend=1 00:06:00.450 --rc geninfo_all_blocks=1 00:06:00.450 --rc geninfo_unexecuted_blocks=1 00:06:00.450 00:06:00.451 ' 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.451 --rc genhtml_branch_coverage=1 00:06:00.451 --rc genhtml_function_coverage=1 00:06:00.451 --rc genhtml_legend=1 00:06:00.451 --rc geninfo_all_blocks=1 00:06:00.451 --rc geninfo_unexecuted_blocks=1 00:06:00.451 00:06:00.451 ' 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.451 --rc genhtml_branch_coverage=1 00:06:00.451 --rc genhtml_function_coverage=1 00:06:00.451 --rc genhtml_legend=1 00:06:00.451 --rc geninfo_all_blocks=1 00:06:00.451 --rc geninfo_unexecuted_blocks=1 00:06:00.451 00:06:00.451 ' 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.451 --rc genhtml_branch_coverage=1 00:06:00.451 --rc genhtml_function_coverage=1 00:06:00.451 --rc genhtml_legend=1 00:06:00.451 --rc geninfo_all_blocks=1 00:06:00.451 --rc geninfo_unexecuted_blocks=1 00:06:00.451 00:06:00.451 ' 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:00.451 15:24:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:00.451 15:24:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.451 15:24:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.451 15:24:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.451 15:24:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.451 15:24:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.451 15:24:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.451 15:24:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.451 15:24:38 json_config -- paths/export.sh@5 -- # export PATH 00:06:00.451 15:24:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@51 -- # : 0 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.451 
15:24:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.451 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.451 15:24:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:00.451 INFO: JSON configuration test init 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.451 15:24:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:00.451 15:24:38 json_config -- json_config/common.sh@9 -- # 
local app=target 00:06:00.451 15:24:38 json_config -- json_config/common.sh@10 -- # shift 00:06:00.451 15:24:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:00.451 15:24:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:00.451 15:24:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:00.451 15:24:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.451 15:24:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.451 15:24:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2091501 00:06:00.451 15:24:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:00.451 15:24:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:00.451 Waiting for target to run... 00:06:00.451 15:24:38 json_config -- json_config/common.sh@25 -- # waitforlisten 2091501 /var/tmp/spdk_tgt.sock 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@833 -- # '[' -z 2091501 ']' 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.451 15:24:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.451 [2024-11-03 15:24:38.193159] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:00.451 [2024-11-03 15:24:38.193218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091501 ] 00:06:00.711 [2024-11-03 15:24:38.477401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.711 [2024-11-03 15:24:38.488956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.280 15:24:39 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.280 15:24:39 json_config -- common/autotest_common.sh@866 -- # return 0 00:06:01.280 15:24:39 json_config -- json_config/common.sh@26 -- # echo '' 00:06:01.280 00:06:01.280 15:24:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:01.280 15:24:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:01.280 15:24:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:01.280 15:24:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.280 15:24:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:01.280 15:24:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:01.280 15:24:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:01.280 15:24:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.280 15:24:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:01.280 15:24:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:01.280 15:24:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:04.570 15:24:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.570 15:24:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:04.570 15:24:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@54 -- # sort 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:04.570 15:24:42 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:04.570 15:24:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.570 15:24:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:04.828 15:24:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.828 15:24:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:06:04.828 15:24:42 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:06:04.829 15:24:42 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:04.829 15:24:42 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:04.829 15:24:42 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:04.829 15:24:42 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:04.829 15:24:42 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:04.829 15:24:42 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.829 15:24:42 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:06:04.829 15:24:42 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.829 15:24:42 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:06:04.829 15:24:42 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:04.829 15:24:42 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:06:04.829 15:24:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:12.949 
15:24:49 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@320 -- # e810=() 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@321 -- # x722=() 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@322 -- # mlx=() 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:12.949 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:12.949 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:12.949 15:24:49 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:12.949 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:12.949 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@62 -- # uname 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:12.949 15:24:49 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:12.950 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:12.950 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:12.950 altname enp217s0f0np0 00:06:12.950 altname ens818f0np0 00:06:12.950 inet 192.168.100.8/24 scope global mlx_0_0 00:06:12.950 valid_lft forever preferred_lft forever 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:12.950 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:12.950 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:12.950 altname enp217s0f1np1 00:06:12.950 altname ens818f1np1 
00:06:12.950 inet 192.168.100.9/24 scope global mlx_0_1 00:06:12.950 valid_lft forever preferred_lft forever 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@450 -- # return 0 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:12.950 192.168.100.9' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:12.950 192.168.100.9' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@485 -- # head -n 1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:12.950 15:24:49 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:12.950 192.168.100.9' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@486 -- # head -n 1 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:12.950 15:24:49 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:12.950 15:24:49 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:06:12.950 15:24:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:12.950 15:24:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:12.950 MallocForNvmf0 00:06:12.950 15:24:49 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:12.950 15:24:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:12.950 MallocForNvmf1 00:06:12.950 15:24:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:12.950 15:24:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:12.950 [2024-11-03 15:24:50.072425] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:12.950 [2024-11-03 15:24:50.100168] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ff2db0/0x1ee4780) succeed. 00:06:12.950 [2024-11-03 15:24:50.112610] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ff5fb0/0x1f25e20) succeed. 
00:06:12.950 15:24:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:12.950 15:24:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:12.950 15:24:50 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.950 15:24:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.950 15:24:50 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:12.950 15:24:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:12.950 15:24:50 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:12.950 15:24:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:13.209 [2024-11-03 15:24:50.859091] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:13.209 15:24:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:13.209 15:24:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.209 15:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.209 15:24:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:13.209 15:24:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.209 15:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.209 15:24:50 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:13.209 15:24:50 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.209 15:24:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.467 MallocBdevForConfigChangeCheck 00:06:13.467 15:24:51 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:13.468 15:24:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.468 15:24:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.468 15:24:51 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:13.468 15:24:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.726 15:24:51 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:13.726 INFO: shutting down applications... 
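Pulled out of the trace above, the whole NVMe-oF target build-up is seven RPCs. The commands and arguments below are exactly the ones the log shows, with the workspace paths shortened:

```bash
RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"   # path shortened from the log
$RPC bdev_malloc_create 8 512  --name MallocForNvmf0   # 8 MiB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB bdev, 1024 B blocks
$RPC nvmf_create_transport -t rdma -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
```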
00:06:13.726 15:24:51 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:13.726 15:24:51 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:13.726 15:24:51 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:13.726 15:24:51 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:16.259 Calling clear_iscsi_subsystem 00:06:16.259 Calling clear_nvmf_subsystem 00:06:16.259 Calling clear_nbd_subsystem 00:06:16.259 Calling clear_ublk_subsystem 00:06:16.259 Calling clear_vhost_blk_subsystem 00:06:16.259 Calling clear_vhost_scsi_subsystem 00:06:16.259 Calling clear_bdev_subsystem 00:06:16.259 15:24:53 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:16.259 15:24:53 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:16.259 15:24:53 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:16.259 15:24:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:16.259 15:24:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:16.259 15:24:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:16.518 15:24:54 json_config -- json_config/json_config.sh@352 -- # break 00:06:16.518 15:24:54 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:16.518 15:24:54 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:16.518 15:24:54 json_config -- json_config/common.sh@31 -- # local app=target 00:06:16.518 15:24:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:16.518 15:24:54 json_config -- json_config/common.sh@35 -- # [[ -n 2091501 ]] 00:06:16.518 15:24:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2091501 00:06:16.518 15:24:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:16.518 15:24:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.518 15:24:54 json_config -- json_config/common.sh@41 -- # kill -0 2091501 00:06:16.518 15:24:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.087 15:24:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.087 15:24:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.087 15:24:54 json_config -- json_config/common.sh@41 -- # kill -0 2091501 00:06:17.087 15:24:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:17.087 15:24:54 json_config -- json_config/common.sh@43 -- # break 00:06:17.087 15:24:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:17.087 15:24:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:17.087 SPDK target shutdown done 00:06:17.087 15:24:54 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:17.087 INFO: relaunching applications... 
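The shutdown path traced above (json_config/common.sh) is a single SIGINT followed by a bounded poll; reconstructed as a standalone snippet:

```bash
# Reconstruction of json_config_test_shutdown_app: SIGINT once, then poll
# `kill -0` for at most 30 half-second rounds, as in the trace above.
app_pid=2091501                      # pid taken from the log; any target pid works
kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done
```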
00:06:17.087 15:24:54 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.087 15:24:54 json_config -- json_config/common.sh@9 -- # local app=target 00:06:17.087 15:24:54 json_config -- json_config/common.sh@10 -- # shift 00:06:17.087 15:24:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:17.087 15:24:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:17.087 15:24:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:17.087 15:24:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.087 15:24:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.087 15:24:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2096551 00:06:17.087 15:24:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:17.087 Waiting for target to run... 00:06:17.087 15:24:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.087 15:24:54 json_config -- json_config/common.sh@25 -- # waitforlisten 2096551 /var/tmp/spdk_tgt.sock 00:06:17.087 15:24:54 json_config -- common/autotest_common.sh@833 -- # '[' -z 2096551 ']' 00:06:17.087 15:24:54 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.087 15:24:54 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.087 15:24:54 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:17.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.087 15:24:54 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.087 15:24:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.087 [2024-11-03 15:24:54.822947] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:17.087 [2024-11-03 15:24:54.823021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096551 ] 00:06:17.655 [2024-11-03 15:24:55.271656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.655 [2024-11-03 15:24:55.293058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.945 [2024-11-03 15:24:58.340846] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x202dac0/0x1f1bd50) succeed. 00:06:20.945 [2024-11-03 15:24:58.351466] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x202dc90/0x1f5d3f0) succeed. 
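The relaunch above starts spdk_tgt from the JSON written earlier by save_config and blocks until the RPC socket answers. A hedged sketch of that wait; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, while the spdk_tgt flags are the ones traced above:

```bash
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json spdk_tgt_config.json &                 # flags as in the log, paths shortened
pid=$!
# Poll until the UNIX-domain RPC socket accepts a request, bailing out
# early if the target dies during startup.
until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo 'target exited during startup' >&2; exit 1; }
    sleep 0.1
done
echo 'Waiting for target to run... done'
```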
00:06:20.945 [2024-11-03 15:24:58.403117] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:21.513 15:24:59 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.513 15:24:59 json_config -- common/autotest_common.sh@866 -- # return 0 00:06:21.513 15:24:59 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.513 00:06:21.513 15:24:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:21.513 15:24:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:21.513 INFO: Checking if target configuration is the same... 00:06:21.513 15:24:59 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.513 15:24:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:21.513 15:24:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.513 + '[' 2 -ne 2 ']' 00:06:21.513 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:21.513 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:21.513 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:21.513 +++ basename /dev/fd/62 00:06:21.513 ++ mktemp /tmp/62.XXX 00:06:21.513 + tmp_file_1=/tmp/62.VEY 00:06:21.513 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.513 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:21.513 + tmp_file_2=/tmp/spdk_tgt_config.json.6JV 00:06:21.513 + ret=0 00:06:21.513 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:21.772 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:21.772 + diff -u /tmp/62.VEY /tmp/spdk_tgt_config.json.6JV 00:06:21.772 + echo 'INFO: JSON config files are the same' 00:06:21.772 INFO: JSON config files are the same 00:06:21.772 + rm /tmp/62.VEY /tmp/spdk_tgt_config.json.6JV 00:06:21.772 + exit 0 00:06:21.772 15:24:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:21.772 15:24:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:21.772 INFO: changing configuration and checking if this can be detected... 
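The "same configuration" check above boils down to canonicalize-then-diff. A file-based paraphrase of json_diff.sh (the real script feeds the live config through /dev/fd/62 rather than a named pipe file):

```bash
live=$(mktemp /tmp/62.XXX)
disk=$(mktemp /tmp/spdk_tgt_config.json.XXX)
# Sort both views with the suite's filter so key ordering cannot cause a diff.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > "$live"
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$disk"
diff -u "$live" "$disk" && echo 'INFO: JSON config files are the same'
rm "$live" "$disk"
```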
00:06:21.772 15:24:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:21.772 15:24:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.030 15:24:59 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.030 15:24:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:22.030 15:24:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.030 + '[' 2 -ne 2 ']' 00:06:22.030 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.030 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:22.030 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:22.030 +++ basename /dev/fd/62 00:06:22.030 ++ mktemp /tmp/62.XXX 00:06:22.030 + tmp_file_1=/tmp/62.NdJ 00:06:22.030 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.030 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.030 + tmp_file_2=/tmp/spdk_tgt_config.json.Pyf 00:06:22.030 + ret=0 00:06:22.030 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.289 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.289 + diff -u /tmp/62.NdJ /tmp/spdk_tgt_config.json.Pyf 00:06:22.289 + ret=1 00:06:22.289 + echo '=== Start of file: /tmp/62.NdJ ===' 00:06:22.289 + cat /tmp/62.NdJ 00:06:22.289 + echo '=== End of file: /tmp/62.NdJ ===' 00:06:22.289 + echo '' 00:06:22.289 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Pyf ===' 00:06:22.289 + cat /tmp/spdk_tgt_config.json.Pyf 00:06:22.289 + echo '=== End of file: /tmp/spdk_tgt_config.json.Pyf ===' 00:06:22.289 + echo '' 00:06:22.289 + rm /tmp/62.NdJ /tmp/spdk_tgt_config.json.Pyf 00:06:22.289 + exit 1 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:22.289 INFO: configuration change detected. 
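The change-detection pass above is the same comparison with the expectation inverted: delete one bdev, re-sort both views, and require the diff to be non-empty. Roughly:

```bash
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live.sorted
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.sorted
if ! diff -u /tmp/live.sorted /tmp/disk.sorted >/dev/null; then
    echo 'INFO: configuration change detected.'   # ret=1 is the passing outcome here
fi
```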
00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:22.289 15:24:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.289 15:24:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@324 -- # [[ -n 2096551 ]] 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:22.289 15:24:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.289 15:24:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:22.289 15:24:59 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:22.289 15:24:59 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.289 15:24:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.289 15:25:00 json_config -- json_config/json_config.sh@330 -- # killprocess 2096551 00:06:22.289 15:25:00 json_config -- common/autotest_common.sh@952 -- # '[' -z 2096551 ']' 00:06:22.289 15:25:00 json_config -- common/autotest_common.sh@956 -- # kill -0 2096551 00:06:22.289 15:25:00 json_config -- common/autotest_common.sh@957 -- # uname 00:06:22.289 15:25:00 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.289 15:25:00 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2096551 00:06:22.548 15:25:00 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.549 15:25:00 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.549 15:25:00 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2096551' 00:06:22.549 killing process with pid 2096551 00:06:22.549 15:25:00 json_config -- common/autotest_common.sh@971 -- # kill 2096551 00:06:22.549 15:25:00 json_config -- common/autotest_common.sh@976 -- # wait 2096551 00:06:25.086 15:25:02 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.086 15:25:02 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:25.086 15:25:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.086 15:25:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.086 15:25:02 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:25.086 15:25:02 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:25.086 INFO: Success 00:06:25.086 15:25:02 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:25.086 15:25:02 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:25.086 15:25:02 json_config -- nvmf/common.sh@121 -- # sync 00:06:25.086 15:25:02 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:06:25.086 15:25:02 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:06:25.086 15:25:02 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:06:25.086 15:25:02 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:25.086 15:25:02 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:06:25.086 00:06:25.086 real 0m24.575s 00:06:25.086 user 0m27.219s 00:06:25.086 sys 0m7.713s 00:06:25.086 15:25:02 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.086 15:25:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.086 ************************************ 00:06:25.086 END TEST json_config 00:06:25.086 ************************************ 00:06:25.086 15:25:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.086 15:25:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.086 15:25:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.086 15:25:02 -- common/autotest_common.sh@10 -- # set +x 00:06:25.086 ************************************ 00:06:25.086 START TEST json_config_extra_key 00:06:25.086 ************************************ 00:06:25.086 15:25:02 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.086 15:25:02 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:25.086 15:25:02 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:25.086 15:25:02 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:25.086 15:25:02 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.086 15:25:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:25.086 15:25:02 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.086 15:25:02 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:25.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.086 --rc genhtml_branch_coverage=1 00:06:25.086 --rc genhtml_function_coverage=1 00:06:25.086 --rc genhtml_legend=1 00:06:25.086 --rc geninfo_all_blocks=1 00:06:25.086 --rc geninfo_unexecuted_blocks=1 00:06:25.086 00:06:25.086 ' 00:06:25.086 15:25:02 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:25.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.086 --rc genhtml_branch_coverage=1 00:06:25.086 --rc genhtml_function_coverage=1 00:06:25.086 --rc genhtml_legend=1 00:06:25.086 --rc geninfo_all_blocks=1 00:06:25.086 --rc geninfo_unexecuted_blocks=1 00:06:25.086 00:06:25.086 ' 00:06:25.087 15:25:02 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:25.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.087 --rc genhtml_branch_coverage=1 00:06:25.087 --rc genhtml_function_coverage=1 00:06:25.087 --rc genhtml_legend=1 00:06:25.087 --rc geninfo_all_blocks=1 00:06:25.087 --rc geninfo_unexecuted_blocks=1 00:06:25.087 00:06:25.087 ' 00:06:25.087 15:25:02 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:25.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.087 --rc genhtml_branch_coverage=1 00:06:25.087 --rc genhtml_function_coverage=1 00:06:25.087 --rc genhtml_legend=1 00:06:25.087 --rc geninfo_all_blocks=1 00:06:25.087 --rc geninfo_unexecuted_blocks=1 00:06:25.087 00:06:25.087 ' 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.087 
15:25:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:25.087 15:25:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.087 15:25:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.087 15:25:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.087 15:25:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.087 15:25:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.087 15:25:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.087 15:25:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.087 15:25:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.087 15:25:02 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.087 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.087 15:25:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:25.087 INFO: launching applications... 
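As declared above, the extra-key harness keys everything on the app name through bash associative arrays, so the same start/stop helpers can serve any app. A minimal reconstruction of that bookkeeping (paths shortened, echo in place of the real launch):

```bash
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='test/json_config/extra_key.json')
app=target
echo "would run: spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"
```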
00:06:25.087 15:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2098093 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.087 Waiting for target to run... 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2098093 /var/tmp/spdk_tgt.sock 00:06:25.087 15:25:02 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2098093 ']' 00:06:25.087 15:25:02 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.087 15:25:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.087 15:25:02 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.087 15:25:02 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.087 15:25:02 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.087 15:25:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.087 [2024-11-03 15:25:02.844421] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:25.087 [2024-11-03 15:25:02.844478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098093 ] 00:06:25.346 [2024-11-03 15:25:03.134959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.605 [2024-11-03 15:25:03.147364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.174 15:25:03 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.174 15:25:03 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:26.174 15:25:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:26.174 00:06:26.174 15:25:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:26.174 INFO: shutting down applications... 
00:06:26.174 15:25:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:26.174 15:25:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:26.174 15:25:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:26.174 15:25:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2098093 ]] 00:06:26.174 15:25:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2098093 00:06:26.174 15:25:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:26.174 15:25:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.174 15:25:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2098093 00:06:26.174 15:25:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.432 15:25:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.432 15:25:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.432 15:25:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2098093 00:06:26.432 15:25:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:26.432 15:25:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:26.432 15:25:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:26.432 15:25:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:26.432 SPDK target shutdown done 00:06:26.432 15:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:26.432 Success 00:06:26.432 00:06:26.432 real 0m1.574s 00:06:26.432 user 0m1.308s 00:06:26.432 sys 0m0.444s 00:06:26.432 15:25:04 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.432 15:25:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:26.432 ************************************ 00:06:26.432 END TEST json_config_extra_key 00:06:26.432 ************************************ 00:06:26.692 15:25:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.692 15:25:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.692 15:25:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.692 15:25:04 -- common/autotest_common.sh@10 -- # set +x 00:06:26.692 ************************************ 00:06:26.692 START TEST alias_rpc 00:06:26.692 ************************************ 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.692 * Looking for test storage... 
00:06:26.692 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.692 15:25:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:26.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.692 --rc genhtml_branch_coverage=1 00:06:26.692 --rc genhtml_function_coverage=1 00:06:26.692 --rc genhtml_legend=1 00:06:26.692 --rc geninfo_all_blocks=1 00:06:26.692 --rc geninfo_unexecuted_blocks=1 00:06:26.692 00:06:26.692 ' 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:26.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.692 --rc genhtml_branch_coverage=1 00:06:26.692 --rc genhtml_function_coverage=1 00:06:26.692 --rc genhtml_legend=1 00:06:26.692 --rc geninfo_all_blocks=1 00:06:26.692 --rc geninfo_unexecuted_blocks=1 00:06:26.692 00:06:26.692 ' 00:06:26.692 15:25:04 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:26.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.692 --rc genhtml_branch_coverage=1 00:06:26.692 --rc genhtml_function_coverage=1 00:06:26.692 --rc genhtml_legend=1 00:06:26.692 --rc geninfo_all_blocks=1 00:06:26.692 --rc geninfo_unexecuted_blocks=1 00:06:26.692 00:06:26.692 ' 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:26.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.692 --rc genhtml_branch_coverage=1 00:06:26.692 --rc genhtml_function_coverage=1 00:06:26.692 --rc genhtml_legend=1 00:06:26.692 --rc geninfo_all_blocks=1 00:06:26.692 --rc geninfo_unexecuted_blocks=1 00:06:26.692 00:06:26.692 ' 00:06:26.692 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:26.692 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2098418 00:06:26.692 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.692 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2098418 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2098418 ']' 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.692 15:25:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.952 [2024-11-03 15:25:04.511962] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
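The lcov gate traced at the top of this test (scripts/common.sh) compares dotted version strings field by field. A reconstruction of that `lt` check, assuming purely numeric fields; the real helper additionally validates each field with decimal():

```bash
lt() {  # lt A B -> success when version A sorts strictly before version B
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields count as 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* options'
```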
00:06:26.952 [2024-11-03 15:25:04.512018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098418 ] 00:06:26.952 [2024-11-03 15:25:04.587654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.952 [2024-11-03 15:25:04.610271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.210 15:25:04 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.210 15:25:04 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:27.210 15:25:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:27.469 15:25:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2098418 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2098418 ']' 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2098418 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2098418 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2098418' 00:06:27.469 killing process with pid 2098418 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@971 -- # kill 2098418 00:06:27.469 15:25:05 alias_rpc -- common/autotest_common.sh@976 -- # wait 2098418 00:06:27.727 00:06:27.727 real 0m1.108s 00:06:27.727 user 0m1.085s 00:06:27.727 sys 0m0.467s 00:06:27.727 15:25:05 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.727 15:25:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.727 ************************************ 00:06:27.727 END TEST alias_rpc 00:06:27.727 ************************************ 00:06:27.727 15:25:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:27.727 15:25:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:27.727 15:25:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.727 15:25:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.727 15:25:05 -- common/autotest_common.sh@10 -- # set +x 00:06:27.728 ************************************ 00:06:27.728 START TEST spdkcli_tcp 00:06:27.728 ************************************ 00:06:27.728 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:27.986 * Looking for test storage... 
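The killprocess trace above guards the kill with two checks: the pid must still be alive, and its command name must not be sudo. A sketch of that helper, assuming the target is a child of the current shell so `wait` can reap it:

```bash
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0       # nothing left to kill
    local name
    name=$(ps --no-headers -o comm= "$pid")      # reactor_0 in the trace above
    [ "$name" = sudo ] && return 1               # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}
killprocess 2098418    # pid from the log
```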
00:06:27.986 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:27.986 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.986 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.986 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.986 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.986 15:25:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.987 15:25:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.987 --rc genhtml_branch_coverage=1 00:06:27.987 --rc genhtml_function_coverage=1 00:06:27.987 --rc genhtml_legend=1 00:06:27.987 --rc geninfo_all_blocks=1 00:06:27.987 --rc geninfo_unexecuted_blocks=1 00:06:27.987 00:06:27.987 ' 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.987 --rc genhtml_branch_coverage=1 00:06:27.987 --rc genhtml_function_coverage=1 00:06:27.987 --rc genhtml_legend=1 00:06:27.987 --rc geninfo_all_blocks=1 00:06:27.987 --rc geninfo_unexecuted_blocks=1 
00:06:27.987 00:06:27.987 ' 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.987 --rc genhtml_branch_coverage=1 00:06:27.987 --rc genhtml_function_coverage=1 00:06:27.987 --rc genhtml_legend=1 00:06:27.987 --rc geninfo_all_blocks=1 00:06:27.987 --rc geninfo_unexecuted_blocks=1 00:06:27.987 00:06:27.987 ' 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.987 --rc genhtml_branch_coverage=1 00:06:27.987 --rc genhtml_function_coverage=1 00:06:27.987 --rc genhtml_legend=1 00:06:27.987 --rc geninfo_all_blocks=1 00:06:27.987 --rc geninfo_unexecuted_blocks=1 00:06:27.987 00:06:27.987 ' 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2098742 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2098742 00:06:27.987 15:25:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2098742 ']' 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.987 15:25:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.987 [2024-11-03 15:25:05.713339] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
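Unlike the earlier tests, this one talks to the target over TCP: a socat process (traced just below) bridges port 9998 to the target's UNIX-domain RPC socket. The commands are as traced, assuming a target is already listening on /var/tmp/spdk.sock:

```bash
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # the TCP-to-UNIX bridge
socat_pid=$!
# -r 100: up to 100 connect retries; -t 2: 2 s timeout per attempt
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"
```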
00:06:27.987 [2024-11-03 15:25:05.713391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098742 ] 00:06:28.246 [2024-11-03 15:25:05.790083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.246 [2024-11-03 15:25:05.815985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.246 [2024-11-03 15:25:05.815988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.246 15:25:06 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:28.246 15:25:06 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:28.246 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2098746 00:06:28.246 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:28.246 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:28.505 [ 00:06:28.505 "bdev_malloc_delete", 00:06:28.505 "bdev_malloc_create", 00:06:28.505 "bdev_null_resize", 00:06:28.505 "bdev_null_delete", 00:06:28.505 "bdev_null_create", 00:06:28.505 "bdev_nvme_cuse_unregister", 00:06:28.505 "bdev_nvme_cuse_register", 00:06:28.505 "bdev_opal_new_user", 00:06:28.505 "bdev_opal_set_lock_state", 00:06:28.505 "bdev_opal_delete", 00:06:28.505 "bdev_opal_get_info", 00:06:28.505 "bdev_opal_create", 00:06:28.505 "bdev_nvme_opal_revert", 00:06:28.505 "bdev_nvme_opal_init", 00:06:28.505 "bdev_nvme_send_cmd", 00:06:28.505 "bdev_nvme_set_keys", 00:06:28.505 "bdev_nvme_get_path_iostat", 00:06:28.505 "bdev_nvme_get_mdns_discovery_info", 00:06:28.505 "bdev_nvme_stop_mdns_discovery", 00:06:28.505 "bdev_nvme_start_mdns_discovery", 00:06:28.505 "bdev_nvme_set_multipath_policy", 00:06:28.505 "bdev_nvme_set_preferred_path", 00:06:28.505 "bdev_nvme_get_io_paths", 00:06:28.505 "bdev_nvme_remove_error_injection", 00:06:28.505 "bdev_nvme_add_error_injection", 00:06:28.505 "bdev_nvme_get_discovery_info", 00:06:28.505 "bdev_nvme_stop_discovery", 00:06:28.505 "bdev_nvme_start_discovery", 00:06:28.505 "bdev_nvme_get_controller_health_info", 00:06:28.505 "bdev_nvme_disable_controller", 00:06:28.505 "bdev_nvme_enable_controller", 00:06:28.505 "bdev_nvme_reset_controller", 00:06:28.505 "bdev_nvme_get_transport_statistics", 00:06:28.505 "bdev_nvme_apply_firmware", 00:06:28.505 "bdev_nvme_detach_controller", 00:06:28.505 "bdev_nvme_get_controllers", 00:06:28.505 "bdev_nvme_attach_controller", 00:06:28.505 "bdev_nvme_set_hotplug", 00:06:28.505 "bdev_nvme_set_options", 00:06:28.505 "bdev_passthru_delete", 00:06:28.505 "bdev_passthru_create", 00:06:28.505 "bdev_lvol_set_parent_bdev", 00:06:28.505 "bdev_lvol_set_parent", 00:06:28.505 "bdev_lvol_check_shallow_copy", 00:06:28.505 "bdev_lvol_start_shallow_copy", 00:06:28.505 "bdev_lvol_grow_lvstore", 00:06:28.505 "bdev_lvol_get_lvols", 00:06:28.505 "bdev_lvol_get_lvstores", 00:06:28.505 "bdev_lvol_delete", 00:06:28.505 "bdev_lvol_set_read_only", 00:06:28.505 "bdev_lvol_resize", 00:06:28.505 "bdev_lvol_decouple_parent", 00:06:28.505 "bdev_lvol_inflate", 00:06:28.505 "bdev_lvol_rename", 00:06:28.505 "bdev_lvol_clone_bdev", 00:06:28.505 "bdev_lvol_clone", 00:06:28.505 "bdev_lvol_snapshot", 00:06:28.505 "bdev_lvol_create", 00:06:28.505 "bdev_lvol_delete_lvstore", 00:06:28.505 "bdev_lvol_rename_lvstore", 
00:06:28.505 "bdev_lvol_create_lvstore", 00:06:28.505 "bdev_raid_set_options", 00:06:28.505 "bdev_raid_remove_base_bdev", 00:06:28.505 "bdev_raid_add_base_bdev", 00:06:28.505 "bdev_raid_delete", 00:06:28.505 "bdev_raid_create", 00:06:28.505 "bdev_raid_get_bdevs", 00:06:28.505 "bdev_error_inject_error", 00:06:28.505 "bdev_error_delete", 00:06:28.505 "bdev_error_create", 00:06:28.505 "bdev_split_delete", 00:06:28.505 "bdev_split_create", 00:06:28.505 "bdev_delay_delete", 00:06:28.505 "bdev_delay_create", 00:06:28.505 "bdev_delay_update_latency", 00:06:28.505 "bdev_zone_block_delete", 00:06:28.505 "bdev_zone_block_create", 00:06:28.505 "blobfs_create", 00:06:28.505 "blobfs_detect", 00:06:28.505 "blobfs_set_cache_size", 00:06:28.505 "bdev_aio_delete", 00:06:28.505 "bdev_aio_rescan", 00:06:28.505 "bdev_aio_create", 00:06:28.505 "bdev_ftl_set_property", 00:06:28.505 "bdev_ftl_get_properties", 00:06:28.505 "bdev_ftl_get_stats", 00:06:28.505 "bdev_ftl_unmap", 00:06:28.505 "bdev_ftl_unload", 00:06:28.505 "bdev_ftl_delete", 00:06:28.505 "bdev_ftl_load", 00:06:28.505 "bdev_ftl_create", 00:06:28.505 "bdev_virtio_attach_controller", 00:06:28.505 "bdev_virtio_scsi_get_devices", 00:06:28.505 "bdev_virtio_detach_controller", 00:06:28.505 "bdev_virtio_blk_set_hotplug", 00:06:28.505 "bdev_iscsi_delete", 00:06:28.505 "bdev_iscsi_create", 00:06:28.505 "bdev_iscsi_set_options", 00:06:28.505 "accel_error_inject_error", 00:06:28.505 "ioat_scan_accel_module", 00:06:28.505 "dsa_scan_accel_module", 00:06:28.505 "iaa_scan_accel_module", 00:06:28.505 "keyring_file_remove_key", 00:06:28.505 "keyring_file_add_key", 00:06:28.505 "keyring_linux_set_options", 00:06:28.505 "fsdev_aio_delete", 00:06:28.505 "fsdev_aio_create", 00:06:28.505 "iscsi_get_histogram", 00:06:28.505 "iscsi_enable_histogram", 00:06:28.505 "iscsi_set_options", 00:06:28.505 "iscsi_get_auth_groups", 00:06:28.505 "iscsi_auth_group_remove_secret", 00:06:28.505 "iscsi_auth_group_add_secret", 00:06:28.505 "iscsi_delete_auth_group", 00:06:28.505 "iscsi_create_auth_group", 00:06:28.505 "iscsi_set_discovery_auth", 00:06:28.505 "iscsi_get_options", 00:06:28.505 "iscsi_target_node_request_logout", 00:06:28.505 "iscsi_target_node_set_redirect", 00:06:28.505 "iscsi_target_node_set_auth", 00:06:28.505 "iscsi_target_node_add_lun", 00:06:28.505 "iscsi_get_stats", 00:06:28.505 "iscsi_get_connections", 00:06:28.505 "iscsi_portal_group_set_auth", 00:06:28.505 "iscsi_start_portal_group", 00:06:28.505 "iscsi_delete_portal_group", 00:06:28.505 "iscsi_create_portal_group", 00:06:28.505 "iscsi_get_portal_groups", 00:06:28.505 "iscsi_delete_target_node", 00:06:28.505 "iscsi_target_node_remove_pg_ig_maps", 00:06:28.505 "iscsi_target_node_add_pg_ig_maps", 00:06:28.505 "iscsi_create_target_node", 00:06:28.505 "iscsi_get_target_nodes", 00:06:28.505 "iscsi_delete_initiator_group", 00:06:28.505 "iscsi_initiator_group_remove_initiators", 00:06:28.505 "iscsi_initiator_group_add_initiators", 00:06:28.505 "iscsi_create_initiator_group", 00:06:28.505 "iscsi_get_initiator_groups", 00:06:28.505 "nvmf_set_crdt", 00:06:28.505 "nvmf_set_config", 00:06:28.505 "nvmf_set_max_subsystems", 00:06:28.505 "nvmf_stop_mdns_prr", 00:06:28.506 "nvmf_publish_mdns_prr", 00:06:28.506 "nvmf_subsystem_get_listeners", 00:06:28.506 "nvmf_subsystem_get_qpairs", 00:06:28.506 "nvmf_subsystem_get_controllers", 00:06:28.506 "nvmf_get_stats", 00:06:28.506 "nvmf_get_transports", 00:06:28.506 "nvmf_create_transport", 00:06:28.506 "nvmf_get_targets", 00:06:28.506 "nvmf_delete_target", 00:06:28.506 "nvmf_create_target", 
00:06:28.506 "nvmf_subsystem_allow_any_host", 00:06:28.506 "nvmf_subsystem_set_keys", 00:06:28.506 "nvmf_subsystem_remove_host", 00:06:28.506 "nvmf_subsystem_add_host", 00:06:28.506 "nvmf_ns_remove_host", 00:06:28.506 "nvmf_ns_add_host", 00:06:28.506 "nvmf_subsystem_remove_ns", 00:06:28.506 "nvmf_subsystem_set_ns_ana_group", 00:06:28.506 "nvmf_subsystem_add_ns", 00:06:28.506 "nvmf_subsystem_listener_set_ana_state", 00:06:28.506 "nvmf_discovery_get_referrals", 00:06:28.506 "nvmf_discovery_remove_referral", 00:06:28.506 "nvmf_discovery_add_referral", 00:06:28.506 "nvmf_subsystem_remove_listener", 00:06:28.506 "nvmf_subsystem_add_listener", 00:06:28.506 "nvmf_delete_subsystem", 00:06:28.506 "nvmf_create_subsystem", 00:06:28.506 "nvmf_get_subsystems", 00:06:28.506 "env_dpdk_get_mem_stats", 00:06:28.506 "nbd_get_disks", 00:06:28.506 "nbd_stop_disk", 00:06:28.506 "nbd_start_disk", 00:06:28.506 "ublk_recover_disk", 00:06:28.506 "ublk_get_disks", 00:06:28.506 "ublk_stop_disk", 00:06:28.506 "ublk_start_disk", 00:06:28.506 "ublk_destroy_target", 00:06:28.506 "ublk_create_target", 00:06:28.506 "virtio_blk_create_transport", 00:06:28.506 "virtio_blk_get_transports", 00:06:28.506 "vhost_controller_set_coalescing", 00:06:28.506 "vhost_get_controllers", 00:06:28.506 "vhost_delete_controller", 00:06:28.506 "vhost_create_blk_controller", 00:06:28.506 "vhost_scsi_controller_remove_target", 00:06:28.506 "vhost_scsi_controller_add_target", 00:06:28.506 "vhost_start_scsi_controller", 00:06:28.506 "vhost_create_scsi_controller", 00:06:28.506 "thread_set_cpumask", 00:06:28.506 "scheduler_set_options", 00:06:28.506 "framework_get_governor", 00:06:28.506 "framework_get_scheduler", 00:06:28.506 "framework_set_scheduler", 00:06:28.506 "framework_get_reactors", 00:06:28.506 "thread_get_io_channels", 00:06:28.506 "thread_get_pollers", 00:06:28.506 "thread_get_stats", 00:06:28.506 "framework_monitor_context_switch", 00:06:28.506 "spdk_kill_instance", 00:06:28.506 "log_enable_timestamps", 00:06:28.506 "log_get_flags", 00:06:28.506 "log_clear_flag", 00:06:28.506 "log_set_flag", 00:06:28.506 "log_get_level", 00:06:28.506 "log_set_level", 00:06:28.506 "log_get_print_level", 00:06:28.506 "log_set_print_level", 00:06:28.506 "framework_enable_cpumask_locks", 00:06:28.506 "framework_disable_cpumask_locks", 00:06:28.506 "framework_wait_init", 00:06:28.506 "framework_start_init", 00:06:28.506 "scsi_get_devices", 00:06:28.506 "bdev_get_histogram", 00:06:28.506 "bdev_enable_histogram", 00:06:28.506 "bdev_set_qos_limit", 00:06:28.506 "bdev_set_qd_sampling_period", 00:06:28.506 "bdev_get_bdevs", 00:06:28.506 "bdev_reset_iostat", 00:06:28.506 "bdev_get_iostat", 00:06:28.506 "bdev_examine", 00:06:28.506 "bdev_wait_for_examine", 00:06:28.506 "bdev_set_options", 00:06:28.506 "accel_get_stats", 00:06:28.506 "accel_set_options", 00:06:28.506 "accel_set_driver", 00:06:28.506 "accel_crypto_key_destroy", 00:06:28.506 "accel_crypto_keys_get", 00:06:28.506 "accel_crypto_key_create", 00:06:28.506 "accel_assign_opc", 00:06:28.506 "accel_get_module_info", 00:06:28.506 "accel_get_opc_assignments", 00:06:28.506 "vmd_rescan", 00:06:28.506 "vmd_remove_device", 00:06:28.506 "vmd_enable", 00:06:28.506 "sock_get_default_impl", 00:06:28.506 "sock_set_default_impl", 00:06:28.506 "sock_impl_set_options", 00:06:28.506 "sock_impl_get_options", 00:06:28.506 "iobuf_get_stats", 00:06:28.506 "iobuf_set_options", 00:06:28.506 "keyring_get_keys", 00:06:28.506 "framework_get_pci_devices", 00:06:28.506 "framework_get_config", 00:06:28.506 "framework_get_subsystems", 
00:06:28.506 "fsdev_set_opts", 00:06:28.506 "fsdev_get_opts", 00:06:28.506 "trace_get_info", 00:06:28.506 "trace_get_tpoint_group_mask", 00:06:28.506 "trace_disable_tpoint_group", 00:06:28.506 "trace_enable_tpoint_group", 00:06:28.506 "trace_clear_tpoint_mask", 00:06:28.506 "trace_set_tpoint_mask", 00:06:28.506 "notify_get_notifications", 00:06:28.506 "notify_get_types", 00:06:28.506 "spdk_get_version", 00:06:28.506 "rpc_get_methods" 00:06:28.506 ] 00:06:28.506 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:28.506 15:25:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.506 15:25:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.506 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:28.506 15:25:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2098742 00:06:28.506 15:25:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2098742 ']' 00:06:28.506 15:25:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2098742 00:06:28.506 15:25:06 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:28.506 15:25:06 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:28.506 15:25:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2098742 00:06:28.765 15:25:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:28.765 15:25:06 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:28.765 15:25:06 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2098742' 00:06:28.765 killing process with pid 2098742 00:06:28.765 15:25:06 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2098742 00:06:28.765 15:25:06 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2098742 00:06:29.024 00:06:29.024 real 0m1.132s 00:06:29.024 user 0m1.844s 00:06:29.024 sys 0m0.493s 00:06:29.024 15:25:06 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.024 15:25:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.024 ************************************ 00:06:29.024 END TEST spdkcli_tcp 00:06:29.024 ************************************ 00:06:29.024 15:25:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.024 15:25:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.024 15:25:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.024 15:25:06 -- common/autotest_common.sh@10 -- # set +x 00:06:29.024 ************************************ 00:06:29.024 START TEST dpdk_mem_utility 00:06:29.024 ************************************ 00:06:29.024 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.024 * Looking for test storage... 
00:06:29.024 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:29.024 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.024 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.024 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.283 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.283 15:25:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.284 15:25:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.284 --rc genhtml_branch_coverage=1 00:06:29.284 --rc genhtml_function_coverage=1 00:06:29.284 --rc genhtml_legend=1 00:06:29.284 --rc geninfo_all_blocks=1 00:06:29.284 --rc geninfo_unexecuted_blocks=1 00:06:29.284 00:06:29.284 ' 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.284 --rc 
genhtml_branch_coverage=1 00:06:29.284 --rc genhtml_function_coverage=1 00:06:29.284 --rc genhtml_legend=1 00:06:29.284 --rc geninfo_all_blocks=1 00:06:29.284 --rc geninfo_unexecuted_blocks=1 00:06:29.284 00:06:29.284 ' 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:29.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.284 --rc genhtml_branch_coverage=1 00:06:29.284 --rc genhtml_function_coverage=1 00:06:29.284 --rc genhtml_legend=1 00:06:29.284 --rc geninfo_all_blocks=1 00:06:29.284 --rc geninfo_unexecuted_blocks=1 00:06:29.284 00:06:29.284 ' 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.284 --rc genhtml_branch_coverage=1 00:06:29.284 --rc genhtml_function_coverage=1 00:06:29.284 --rc genhtml_legend=1 00:06:29.284 --rc geninfo_all_blocks=1 00:06:29.284 --rc geninfo_unexecuted_blocks=1 00:06:29.284 00:06:29.284 ' 00:06:29.284 15:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:29.284 15:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2098961 00:06:29.284 15:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2098961 00:06:29.284 15:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2098961 ']' 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.284 15:25:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.284 [2024-11-03 15:25:06.925903] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
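The target's EAL parameters are echoed on the next line; once spdk_tgt is up, the memory-utility check reduces to a handful of calls. A sketch, assuming an SPDK checkout as the working directory and collapsing waitforlisten's retry logic into a crude poll:

  ./build/bin/spdk_tgt &                       # same binary the harness launched
  spdk_pid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  ./scripts/rpc.py env_dpdk_get_mem_stats      # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                   # heap/mempool/memzone summary
  ./scripts/dpdk_mem_info.py -m 0              # per-element detail for heap 0
  kill $spdk_pid

The two dpdk_mem_info.py invocations are what produce the summary and the heap-0 element listing dumped below.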
00:06:29.284 [2024-11-03 15:25:06.925962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098961 ] 00:06:29.284 [2024-11-03 15:25:07.004099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.284 [2024-11-03 15:25:07.025712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.544 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.544 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:29.544 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:29.544 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:29.544 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.544 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.544 { 00:06:29.544 "filename": "/tmp/spdk_mem_dump.txt" 00:06:29.544 } 00:06:29.544 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.544 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:29.544 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:29.544 1 heaps totaling size 810.000000 MiB 00:06:29.544 size: 810.000000 MiB heap id: 0 00:06:29.544 end heaps---------- 00:06:29.544 9 mempools totaling size 595.772034 MiB 00:06:29.544 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:29.544 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:29.544 size: 92.545471 MiB name: bdev_io_2098961 00:06:29.544 size: 50.003479 MiB name: msgpool_2098961 00:06:29.544 size: 36.509338 MiB name: fsdev_io_2098961 00:06:29.544 size: 21.763794 MiB name: PDU_Pool 00:06:29.544 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:29.544 size: 4.133484 MiB name: evtpool_2098961 00:06:29.544 size: 0.026123 MiB name: Session_Pool 00:06:29.544 end mempools------- 00:06:29.544 6 memzones totaling size 4.142822 MiB 00:06:29.544 size: 1.000366 MiB name: RG_ring_0_2098961 00:06:29.544 size: 1.000366 MiB name: RG_ring_1_2098961 00:06:29.544 size: 1.000366 MiB name: RG_ring_4_2098961 00:06:29.544 size: 1.000366 MiB name: RG_ring_5_2098961 00:06:29.544 size: 0.125366 MiB name: RG_ring_2_2098961 00:06:29.544 size: 0.015991 MiB name: RG_ring_3_2098961 00:06:29.544 end memzones------- 00:06:29.544 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:29.544 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:29.544 list of free elements. 
size: 10.862488 MiB 00:06:29.544 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:29.544 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:29.544 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:29.544 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:29.544 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:29.544 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:29.544 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:29.544 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:29.544 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:29.544 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:29.544 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:29.544 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:29.544 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:29.544 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:29.544 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:29.544 list of standard malloc elements. size: 199.218628 MiB 00:06:29.544 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:29.544 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:29.544 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:29.544 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:29.544 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:29.544 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:29.544 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:29.544 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:29.544 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:29.544 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:29.544 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:29.544 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:29.544 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:29.544 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:29.544 list of memzone associated elements. size: 599.918884 MiB 00:06:29.544 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:29.544 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:29.544 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:29.544 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:29.544 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:29.544 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2098961_0 00:06:29.544 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:29.544 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2098961_0 00:06:29.544 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:29.544 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2098961_0 00:06:29.544 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:29.544 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:29.544 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:29.544 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:29.544 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:29.544 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2098961_0 00:06:29.544 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:29.544 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2098961 00:06:29.544 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:29.544 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2098961 00:06:29.544 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:29.544 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:29.544 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:29.545 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:29.545 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:29.545 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:29.545 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:29.545 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:29.545 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:29.545 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2098961 00:06:29.545 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:29.545 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2098961 00:06:29.545 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:29.545 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2098961 00:06:29.545 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:06:29.545 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2098961 00:06:29.545 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:29.545 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2098961 00:06:29.545 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:29.545 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2098961 00:06:29.545 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:29.545 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:29.545 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:29.545 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:29.545 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:29.545 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:29.545 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:29.545 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2098961 00:06:29.545 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:29.545 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2098961 00:06:29.545 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:29.545 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:29.545 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:29.545 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:29.545 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:29.545 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2098961 00:06:29.545 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:29.545 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:29.545 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:29.545 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2098961 00:06:29.545 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:29.545 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2098961 00:06:29.545 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:29.545 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2098961 00:06:29.545 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:29.545 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:29.545 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:29.545 15:25:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2098961 00:06:29.545 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2098961 ']' 00:06:29.545 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2098961 00:06:29.545 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:29.545 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:29.804 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2098961 00:06:29.804 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:29.804 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:29.804 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2098961' 00:06:29.804 killing process with pid 2098961 00:06:29.804 15:25:07 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2098961 00:06:29.804 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2098961 00:06:30.064 00:06:30.064 real 0m1.004s 00:06:30.064 user 0m0.899s 00:06:30.064 sys 0m0.461s 00:06:30.064 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.064 15:25:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.064 ************************************ 00:06:30.064 END TEST dpdk_mem_utility 00:06:30.064 ************************************ 00:06:30.064 15:25:07 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:30.064 15:25:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.064 15:25:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.064 15:25:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.064 ************************************ 00:06:30.064 START TEST event 00:06:30.064 ************************************ 00:06:30.064 15:25:07 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:30.064 * Looking for test storage... 00:06:30.324 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.324 15:25:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.324 15:25:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.324 15:25:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.324 15:25:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.324 15:25:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.324 15:25:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.324 15:25:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.324 15:25:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.324 15:25:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.324 15:25:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.324 15:25:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.324 15:25:07 event -- scripts/common.sh@344 -- # case "$op" in 00:06:30.324 15:25:07 event -- scripts/common.sh@345 -- # : 1 00:06:30.324 15:25:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.324 15:25:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.324 15:25:07 event -- scripts/common.sh@365 -- # decimal 1 00:06:30.324 15:25:07 event -- scripts/common.sh@353 -- # local d=1 00:06:30.324 15:25:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.324 15:25:07 event -- scripts/common.sh@355 -- # echo 1 00:06:30.324 15:25:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.324 15:25:07 event -- scripts/common.sh@366 -- # decimal 2 00:06:30.324 15:25:07 event -- scripts/common.sh@353 -- # local d=2 00:06:30.324 15:25:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.324 15:25:07 event -- scripts/common.sh@355 -- # echo 2 00:06:30.324 15:25:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.324 15:25:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.324 15:25:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.324 15:25:07 event -- scripts/common.sh@368 -- # return 0 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.324 --rc genhtml_branch_coverage=1 00:06:30.324 --rc genhtml_function_coverage=1 00:06:30.324 --rc genhtml_legend=1 00:06:30.324 --rc geninfo_all_blocks=1 00:06:30.324 --rc geninfo_unexecuted_blocks=1 00:06:30.324 00:06:30.324 ' 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.324 --rc genhtml_branch_coverage=1 00:06:30.324 --rc genhtml_function_coverage=1 00:06:30.324 --rc genhtml_legend=1 00:06:30.324 --rc geninfo_all_blocks=1 00:06:30.324 --rc geninfo_unexecuted_blocks=1 00:06:30.324 00:06:30.324 ' 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.324 --rc genhtml_branch_coverage=1 00:06:30.324 --rc genhtml_function_coverage=1 00:06:30.324 --rc genhtml_legend=1 00:06:30.324 --rc geninfo_all_blocks=1 00:06:30.324 --rc geninfo_unexecuted_blocks=1 00:06:30.324 00:06:30.324 ' 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.324 --rc genhtml_branch_coverage=1 00:06:30.324 --rc genhtml_function_coverage=1 00:06:30.324 --rc genhtml_legend=1 00:06:30.324 --rc geninfo_all_blocks=1 00:06:30.324 --rc geninfo_unexecuted_blocks=1 00:06:30.324 00:06:30.324 ' 00:06:30.324 15:25:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:30.324 15:25:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.324 15:25:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:30.324 15:25:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.324 15:25:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.324 ************************************ 00:06:30.324 START TEST event_perf 00:06:30.324 ************************************ 00:06:30.324 15:25:07 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
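event_perf is a throughput loop: events are continually re-posted on every core in the reactor mask and counted per lcore until the timer expires (read off its output below, not its source). In the invocation above, -m 0xF spans four reactors and -t 1 runs for one second. Two handy variations; the awk one-liner is a throwaway helper that leans on the "lcore N: count" output format shown below:

  # single-core baseline
  ./test/event/event_perf/event_perf -m 0x1 -t 1
  # aggregate the per-lcore counters into one number
  ./test/event/event_perf/event_perf -m 0xF -t 1 | awk '/^lcore/ { n += $3 } END { print n, "events total" }'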
00:06:30.324 Running I/O for 1 seconds...[2024-11-03 15:25:08.011804] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:30.324 [2024-11-03 15:25:08.011882] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099161 ] 00:06:30.324 [2024-11-03 15:25:08.093334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.583 [2024-11-03 15:25:08.119907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.583 [2024-11-03 15:25:08.119995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.583 [2024-11-03 15:25:08.120056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.583 [2024-11-03 15:25:08.120054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.520 Running I/O for 1 seconds... 00:06:31.520 lcore 0: 210174 00:06:31.520 lcore 1: 210174 00:06:31.520 lcore 2: 210175 00:06:31.520 lcore 3: 210174 00:06:31.520 done. 00:06:31.520 00:06:31.520 real 0m1.165s 00:06:31.520 user 0m4.068s 00:06:31.520 sys 0m0.094s 00:06:31.520 15:25:09 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.520 15:25:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.520 ************************************ 00:06:31.520 END TEST event_perf 00:06:31.520 ************************************ 00:06:31.520 15:25:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:31.520 15:25:09 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:31.520 15:25:09 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.520 15:25:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.520 ************************************ 00:06:31.520 START TEST event_reactor 00:06:31.520 ************************************ 00:06:31.520 15:25:09 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:31.520 [2024-11-03 15:25:09.260126] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
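The reactor test starting here (invoked above with -t 1) shifts focus from raw event counts to a single reactor driving timed pollers; the test_start / oneshot / tick N / test_end lines further down are its own trace, the tick numbers naming which poller fired (again read from the output, not the source). Its only knob is duration:

  # same smoke test held open for five seconds;
  # -t mirrors event_perf's run-time-in-seconds flag
  ./test/event/reactor/reactor -t 5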
00:06:31.520 [2024-11-03 15:25:09.260208] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099450 ] 00:06:31.779 [2024-11-03 15:25:09.339642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.779 [2024-11-03 15:25:09.361322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.732 test_start 00:06:32.732 oneshot 00:06:32.732 tick 100 00:06:32.732 tick 100 00:06:32.732 tick 250 00:06:32.732 tick 100 00:06:32.732 tick 100 00:06:32.732 tick 250 00:06:32.732 tick 100 00:06:32.732 tick 500 00:06:32.732 tick 100 00:06:32.732 tick 100 00:06:32.732 tick 250 00:06:32.732 tick 100 00:06:32.732 tick 100 00:06:32.732 test_end 00:06:32.732 00:06:32.732 real 0m1.159s 00:06:32.732 user 0m1.068s 00:06:32.732 sys 0m0.087s 00:06:32.732 15:25:10 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:32.732 15:25:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:32.732 ************************************ 00:06:32.732 END TEST event_reactor 00:06:32.732 ************************************ 00:06:32.732 15:25:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.732 15:25:10 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:32.732 15:25:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:32.732 15:25:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.732 ************************************ 00:06:32.732 START TEST event_reactor_perf 00:06:32.732 ************************************ 00:06:32.732 15:25:10 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.732 [2024-11-03 15:25:10.504468] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
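reactor_perf, starting here, times how fast one reactor can turn around back-to-back events and prints a single figure (the "Performance: N events per second" line below). Throughput inverts directly into per-event overhead; a throwaway helper for that conversion, with perf.log standing in for captured output:

  # 528886 events/s works out to roughly 1.89 us per event
  awk '/events per second/ { printf "%.2f us per event\n", 1e6 / $2 }' perf.log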
00:06:32.732 [2024-11-03 15:25:10.504551] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099730 ] 00:06:33.052 [2024-11-03 15:25:10.586332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.052 [2024-11-03 15:25:10.609164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.002 test_start 00:06:34.002 test_end 00:06:34.002 Performance: 528886 events per second 00:06:34.002 00:06:34.002 real 0m1.157s 00:06:34.002 user 0m1.071s 00:06:34.002 sys 0m0.081s 00:06:34.002 15:25:11 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.002 15:25:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.002 ************************************ 00:06:34.002 END TEST event_reactor_perf 00:06:34.002 ************************************ 00:06:34.002 15:25:11 event -- event/event.sh@49 -- # uname -s 00:06:34.002 15:25:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:34.002 15:25:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:34.002 15:25:11 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:34.002 15:25:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.002 15:25:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.002 ************************************ 00:06:34.002 START TEST event_scheduler 00:06:34.002 ************************************ 00:06:34.002 15:25:11 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:34.262 * Looking for test storage... 
00:06:34.262 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.262 15:25:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.262 --rc genhtml_branch_coverage=1 00:06:34.262 --rc genhtml_function_coverage=1 00:06:34.262 --rc genhtml_legend=1 00:06:34.262 --rc geninfo_all_blocks=1 00:06:34.262 --rc geninfo_unexecuted_blocks=1 00:06:34.262 00:06:34.262 ' 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.262 --rc genhtml_branch_coverage=1 00:06:34.262 --rc genhtml_function_coverage=1 00:06:34.262 --rc genhtml_legend=1 00:06:34.262 --rc geninfo_all_blocks=1 00:06:34.262 --rc geninfo_unexecuted_blocks=1 00:06:34.262 00:06:34.262 ' 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.262 --rc genhtml_branch_coverage=1 00:06:34.262 --rc genhtml_function_coverage=1 00:06:34.262 --rc genhtml_legend=1 00:06:34.262 --rc geninfo_all_blocks=1 00:06:34.262 --rc geninfo_unexecuted_blocks=1 00:06:34.262 00:06:34.262 ' 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.262 --rc genhtml_branch_coverage=1 00:06:34.262 --rc genhtml_function_coverage=1 00:06:34.262 --rc genhtml_legend=1 00:06:34.262 --rc geninfo_all_blocks=1 00:06:34.262 --rc geninfo_unexecuted_blocks=1 00:06:34.262 00:06:34.262 ' 00:06:34.262 15:25:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:34.262 15:25:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2100052 00:06:34.262 15:25:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.262 15:25:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:34.262 15:25:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2100052 
00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2100052 ']' 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.262 15:25:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.262 [2024-11-03 15:25:11.960275] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:34.262 [2024-11-03 15:25:11.960326] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100052 ] 00:06:34.262 [2024-11-03 15:25:12.032977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.522 [2024-11-03 15:25:12.059749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.522 [2024-11-03 15:25:12.059835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.522 [2024-11-03 15:25:12.059923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.522 [2024-11-03 15:25:12.059925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:34.522 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.522 [2024-11-03 15:25:12.112514] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:34.522 [2024-11-03 15:25:12.112535] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:34.522 [2024-11-03 15:25:12.112546] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:34.522 [2024-11-03 15:25:12.112555] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:34.522 [2024-11-03 15:25:12.112563] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.522 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.522 [2024-11-03 15:25:12.181271] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
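Everything to this point is plumbing: the scheduler app is launched paused (--wait-for-rpc), switched from the default static scheduler to the dynamic one, and only then allowed to finish init. The governor error above is non-fatal; the dynamic scheduler simply falls back to its defaults (load limit 20, core limit 80, core busy 95, as echoed). Reduced to the RPC essentials, assuming the default /var/tmp/spdk.sock socket:

  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  ./scripts/rpc.py framework_set_scheduler dynamic   # issued while init is paused
  ./scripts/rpc.py framework_start_init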
00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.522 15:25:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.522 15:25:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.522 ************************************ 00:06:34.522 START TEST scheduler_create_thread 00:06:34.522 ************************************ 00:06:34.522 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:34.522 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.522 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.522 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.522 2 00:06:34.522 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.523 3 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.523 4 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.523 5 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.523 6 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.523 7 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.523 8 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.523 9 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.523 10 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.523 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.783 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.783 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.783 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.783 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.783 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.783 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.783 15:25:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.783 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.783 15:25:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.161 15:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.161 15:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:36.161 15:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:36.161 15:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.161 15:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.098 15:25:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.098 00:06:37.098 real 0m2.621s 00:06:37.098 user 0m0.023s 00:06:37.098 sys 0m0.008s 00:06:37.098 15:25:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:37.098 15:25:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.098 ************************************ 00:06:37.098 END TEST scheduler_create_thread 00:06:37.098 ************************************ 00:06:37.098 15:25:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:37.098 15:25:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2100052 00:06:37.098 15:25:14 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2100052 ']' 00:06:37.098 15:25:14 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 2100052 00:06:37.098 15:25:14 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:37.356 15:25:14 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:37.356 15:25:14 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2100052 00:06:37.356 15:25:14 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:37.356 15:25:14 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:37.356 15:25:14 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2100052' 00:06:37.356 killing process with pid 2100052 00:06:37.356 15:25:14 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2100052 00:06:37.356 15:25:14 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2100052 00:06:37.615 [2024-11-03 15:25:15.319274] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
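All of the thread churn above was driven over JSON-RPC through the test's own plugin rather than built-in methods: rpc.py loads it via --plugin, which works only if the plugin module is importable (scheduler.sh is assumed to put its directory on PYTHONPATH). The calls reduce to:

  # create a thread pinned to core 0 at 100% active, then retune and delete by id
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12

The ids 11 and 12 are the thread_id values returned by the create calls in this particular run; a fresh run would hand back different ones.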
00:06:37.874 00:06:37.874 real 0m3.759s 00:06:37.874 user 0m5.587s 00:06:37.874 sys 0m0.438s 00:06:37.874 15:25:15 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:37.874 15:25:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.874 ************************************ 00:06:37.874 END TEST event_scheduler 00:06:37.874 ************************************ 00:06:37.874 15:25:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:37.874 15:25:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:37.874 15:25:15 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:37.874 15:25:15 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.874 15:25:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.874 ************************************ 00:06:37.874 START TEST app_repeat 00:06:37.874 ************************************ 00:06:37.874 15:25:15 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2100649 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2100649' 00:06:37.874 Process app_repeat pid: 2100649 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:37.874 spdk_app_start Round 0 00:06:37.874 15:25:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2100649 /var/tmp/spdk-nbd.sock 00:06:37.874 15:25:15 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2100649 ']' 00:06:37.874 15:25:15 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.874 15:25:15 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:37.874 15:25:15 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.874 15:25:15 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:37.874 15:25:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.874 [2024-11-03 15:25:15.592313] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:37.874 [2024-11-03 15:25:15.592375] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100649 ] 00:06:38.133 [2024-11-03 15:25:15.670413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.133 [2024-11-03 15:25:15.692429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.133 [2024-11-03 15:25:15.692432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.133 15:25:15 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.133 15:25:15 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:38.133 15:25:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.392 Malloc0 00:06:38.392 15:25:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.392 Malloc1 00:06:38.392 15:25:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.392 15:25:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.392 15:25:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.392 15:25:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.392 15:25:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.392 15:25:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.651 15:25:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.652 /dev/nbd0 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 
00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.652 1+0 records in 00:06:38.652 1+0 records out 00:06:38.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229923 s, 17.8 MB/s 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:38.652 15:25:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.652 15:25:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.911 /dev/nbd1 00:06:38.911 15:25:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.911 15:25:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.911 1+0 records in 00:06:38.911 1+0 records out 00:06:38.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254116 s, 16.1 MB/s 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:38.911 15:25:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:38.911 15:25:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.911 15:25:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.911 15:25:16 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.911 15:25:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.911 15:25:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.170 { 00:06:39.170 "nbd_device": "/dev/nbd0", 00:06:39.170 "bdev_name": "Malloc0" 00:06:39.170 }, 00:06:39.170 { 00:06:39.170 "nbd_device": "/dev/nbd1", 00:06:39.170 "bdev_name": "Malloc1" 00:06:39.170 } 00:06:39.170 ]' 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.170 { 00:06:39.170 "nbd_device": "/dev/nbd0", 00:06:39.170 "bdev_name": "Malloc0" 00:06:39.170 }, 00:06:39.170 { 00:06:39.170 "nbd_device": "/dev/nbd1", 00:06:39.170 "bdev_name": "Malloc1" 00:06:39.170 } 00:06:39.170 ]' 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.170 /dev/nbd1' 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.170 /dev/nbd1' 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.170 256+0 records in 00:06:39.170 256+0 records out 00:06:39.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109776 s, 95.5 MB/s 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.170 15:25:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.429 256+0 records in 00:06:39.429 256+0 records out 00:06:39.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019248 s, 54.5 MB/s 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.429 256+0 records in 00:06:39.429 256+0 records out 00:06:39.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018716 s, 56.0 MB/s 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.429 15:25:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.429 15:25:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.429 15:25:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.429 15:25:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.429 15:25:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.429 15:25:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.429 15:25:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.429 15:25:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.429 15:25:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.429 15:25:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.689 15:25:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.947 15:25:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.947 15:25:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.206 15:25:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.466 [2024-11-03 15:25:18.035754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.466 [2024-11-03 15:25:18.055576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.466 [2024-11-03 15:25:18.055577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.466 [2024-11-03 15:25:18.095825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.466 [2024-11-03 15:25:18.095868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.755 15:25:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.755 15:25:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:43.755 spdk_app_start Round 1 00:06:43.755 15:25:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2100649 /var/tmp/spdk-nbd.sock 00:06:43.755 15:25:20 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2100649 ']' 00:06:43.755 15:25:20 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.755 15:25:20 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.755 15:25:20 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
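The Round 0 body that just finished is a write/verify round trip over the two NBD exports: 1 MiB of /dev/urandom goes into a scratch file, the same file is dd'd onto /dev/nbd0 and /dev/nbd1 with oflag=direct, and cmp -b -n 1M reads each device back to confirm the malloc bdevs stored the data intact. A condensed sketch of that loop as the trace shows it (paths shortened; $testdir is an assumed stand-in for the jenkins workspace path):

    # Write random data to every NBD device, then verify it byte-for-byte.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$testdir/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"       # fails loudly on the first mismatch
    done
    rm "$tmp_file"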
00:06:43.755 15:25:20 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.755 15:25:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.755 15:25:21 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:43.755 15:25:21 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:43.755 15:25:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.755 Malloc0 00:06:43.755 15:25:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.755 Malloc1 00:06:43.755 15:25:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.755 15:25:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.014 /dev/nbd0 00:06:44.014 15:25:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.014 15:25:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:44.014 1+0 records in 00:06:44.014 1+0 records out 00:06:44.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226749 s, 18.1 MB/s 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:44.014 15:25:21 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:44.014 15:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.014 15:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.015 15:25:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.273 /dev/nbd1 00:06:44.273 15:25:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.273 15:25:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.273 15:25:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:44.273 15:25:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:44.273 15:25:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:44.273 15:25:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:44.273 15:25:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:44.273 15:25:21 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:44.273 15:25:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:44.273 15:25:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:44.273 15:25:21 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.273 1+0 records in 00:06:44.273 1+0 records out 00:06:44.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00840623 s, 487 kB/s 00:06:44.274 15:25:21 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.274 15:25:21 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:44.274 15:25:21 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.274 15:25:21 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:44.274 15:25:21 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:44.274 15:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.274 15:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.274 15:25:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.274 15:25:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.274 15:25:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:44.533 { 00:06:44.533 
"nbd_device": "/dev/nbd0", 00:06:44.533 "bdev_name": "Malloc0" 00:06:44.533 }, 00:06:44.533 { 00:06:44.533 "nbd_device": "/dev/nbd1", 00:06:44.533 "bdev_name": "Malloc1" 00:06:44.533 } 00:06:44.533 ]' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.533 { 00:06:44.533 "nbd_device": "/dev/nbd0", 00:06:44.533 "bdev_name": "Malloc0" 00:06:44.533 }, 00:06:44.533 { 00:06:44.533 "nbd_device": "/dev/nbd1", 00:06:44.533 "bdev_name": "Malloc1" 00:06:44.533 } 00:06:44.533 ]' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.533 /dev/nbd1' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.533 /dev/nbd1' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.533 256+0 records in 00:06:44.533 256+0 records out 00:06:44.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106211 s, 98.7 MB/s 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.533 256+0 records in 00:06:44.533 256+0 records out 00:06:44.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195057 s, 53.8 MB/s 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.533 256+0 records in 00:06:44.533 256+0 records out 00:06:44.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204504 s, 51.3 MB/s 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.533 15:25:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.792 15:25:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.051 15:25:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.310 15:25:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.310 15:25:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.569 15:25:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:45.569 [2024-11-03 15:25:23.297193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.569 [2024-11-03 15:25:23.316472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.569 [2024-11-03 15:25:23.316473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.569 [2024-11-03 15:25:23.358195] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.569 [2024-11-03 15:25:23.358237] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.858 15:25:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.858 15:25:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:48.858 spdk_app_start Round 2 00:06:48.858 15:25:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2100649 /var/tmp/spdk-nbd.sock 00:06:48.858 15:25:26 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2100649 ']' 00:06:48.858 15:25:26 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.858 15:25:26 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.858 15:25:26 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
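Each nbd_start_disk call in these rounds is followed by waitfornbd, which polls /proc/partitions (up to 20 tries) until the kernel has registered the device, then proves it is readable with a single 4 KiB O_DIRECT read whose size must be non-zero. Roughly, per the trace (the inter-poll sleep is an assumption, since every poll here succeeds on the first try, and $testdir again abbreviates the workspace path):

    # Poll until the NBD device shows up, then confirm a direct read works.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; not visible in this trace
        done
        # Prove the device is actually readable: one 4 KiB O_DIRECT read.
        dd if=/dev/"$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$testdir/nbdtest")
        rm -f "$testdir/nbdtest"
        [ "$size" != 0 ]    # a zero-byte read means the export is broken
    }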
00:06:48.858 15:25:26 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.858 15:25:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.858 15:25:26 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.858 15:25:26 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:48.858 15:25:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.858 Malloc0 00:06:48.858 15:25:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.117 Malloc1 00:06:49.117 15:25:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.117 15:25:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.376 /dev/nbd0 00:06:49.376 15:25:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.376 15:25:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:49.376 1+0 records in 00:06:49.376 1+0 records out 00:06:49.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025755 s, 15.9 MB/s 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:49.376 15:25:26 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:49.376 15:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.376 15:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.376 15:25:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.635 /dev/nbd1 00:06:49.635 15:25:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.635 15:25:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.635 1+0 records in 00:06:49.635 1+0 records out 00:06:49.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280267 s, 14.6 MB/s 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:49.635 15:25:27 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:49.635 15:25:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.635 15:25:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.635 15:25:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.636 15:25:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.636 15:25:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.636 15:25:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.636 { 00:06:49.636 
"nbd_device": "/dev/nbd0", 00:06:49.636 "bdev_name": "Malloc0" 00:06:49.636 }, 00:06:49.636 { 00:06:49.636 "nbd_device": "/dev/nbd1", 00:06:49.636 "bdev_name": "Malloc1" 00:06:49.636 } 00:06:49.636 ]' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.895 { 00:06:49.895 "nbd_device": "/dev/nbd0", 00:06:49.895 "bdev_name": "Malloc0" 00:06:49.895 }, 00:06:49.895 { 00:06:49.895 "nbd_device": "/dev/nbd1", 00:06:49.895 "bdev_name": "Malloc1" 00:06:49.895 } 00:06:49.895 ]' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.895 /dev/nbd1' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.895 /dev/nbd1' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.895 256+0 records in 00:06:49.895 256+0 records out 00:06:49.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109074 s, 96.1 MB/s 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.895 256+0 records in 00:06:49.895 256+0 records out 00:06:49.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191477 s, 54.8 MB/s 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.895 256+0 records in 00:06:49.895 256+0 records out 00:06:49.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204871 s, 51.2 MB/s 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.895 15:25:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.155 15:25:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.414 15:25:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.414 15:25:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.414 15:25:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.414 15:25:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.414 15:25:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.673 15:25:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.673 15:25:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.673 15:25:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.673 15:25:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.673 15:25:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.673 15:25:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.673 15:25:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.673 15:25:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.673 15:25:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.673 15:25:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.932 [2024-11-03 15:25:28.569188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.932 [2024-11-03 15:25:28.588693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.932 [2024-11-03 15:25:28.588694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.932 [2024-11-03 15:25:28.630220] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.932 [2024-11-03 15:25:28.630266] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.223 15:25:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2100649 /var/tmp/spdk-nbd.sock 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2100649 ']' 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
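The count=0 checks at the end of each round come from nbd_get_count: ask the RPC server for its current disk list, pull the nbd_device fields out of the JSON with jq, and count /dev/nbd entries; after nbd_stop_disk has run for both devices the expected count is 0. A compact sketch of that check (with $rootdir standing in, by assumption, for the spdk checkout path):

    # Count how many NBD devices the SPDK app still exports.
    nbd_get_count() {
        local rpc_server=$1 disks_json names
        disks_json=$("$rootdir"/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits 1 on no match, hence the || true
        echo "$names" | grep -c /dev/nbd || true
    }

Used as in the trace: count=$(nbd_get_count /var/tmp/spdk-nbd.sock) followed by a '[' 0 -ne 0 ']' guard that only trips if a device survived teardown.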
00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:54.223 15:25:31 event.app_repeat -- event/event.sh@39 -- # killprocess 2100649 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2100649 ']' 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2100649 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2100649 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2100649' 00:06:54.223 killing process with pid 2100649 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2100649 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2100649 00:06:54.223 spdk_app_start is called in Round 0. 00:06:54.223 Shutdown signal received, stop current app iteration 00:06:54.223 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 reinitialization... 00:06:54.223 spdk_app_start is called in Round 1. 00:06:54.223 Shutdown signal received, stop current app iteration 00:06:54.223 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 reinitialization... 00:06:54.223 spdk_app_start is called in Round 2. 00:06:54.223 Shutdown signal received, stop current app iteration 00:06:54.223 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 reinitialization... 00:06:54.223 spdk_app_start is called in Round 3. 
00:06:54.223 Shutdown signal received, stop current app iteration 00:06:54.223 15:25:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:54.223 15:25:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:54.223 00:06:54.223 real 0m16.253s 00:06:54.223 user 0m35.196s 00:06:54.223 sys 0m3.064s 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.223 15:25:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.223 ************************************ 00:06:54.223 END TEST app_repeat 00:06:54.223 ************************************ 00:06:54.223 15:25:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:54.223 15:25:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.223 15:25:31 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.223 15:25:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.223 15:25:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.223 ************************************ 00:06:54.223 START TEST cpu_locks 00:06:54.223 ************************************ 00:06:54.223 15:25:31 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.223 * Looking for test storage... 00:06:54.223 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:54.224 15:25:31 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:54.224 15:25:31 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:54.224 15:25:31 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:54.483 15:25:32 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.483 15:25:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:54.484 15:25:32 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.484 15:25:32 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:54.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.484 --rc genhtml_branch_coverage=1 00:06:54.484 --rc genhtml_function_coverage=1 00:06:54.484 --rc genhtml_legend=1 00:06:54.484 --rc geninfo_all_blocks=1 00:06:54.484 --rc geninfo_unexecuted_blocks=1 00:06:54.484 00:06:54.484 ' 00:06:54.484 15:25:32 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:54.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.484 --rc genhtml_branch_coverage=1 00:06:54.484 --rc genhtml_function_coverage=1 00:06:54.484 --rc genhtml_legend=1 00:06:54.484 --rc geninfo_all_blocks=1 00:06:54.484 --rc geninfo_unexecuted_blocks=1 00:06:54.484 00:06:54.484 ' 00:06:54.484 15:25:32 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:54.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.484 --rc genhtml_branch_coverage=1 00:06:54.484 --rc genhtml_function_coverage=1 00:06:54.484 --rc genhtml_legend=1 00:06:54.484 --rc geninfo_all_blocks=1 00:06:54.484 --rc geninfo_unexecuted_blocks=1 00:06:54.484 00:06:54.484 ' 00:06:54.484 15:25:32 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:54.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.484 --rc genhtml_branch_coverage=1 00:06:54.484 --rc genhtml_function_coverage=1 00:06:54.484 --rc genhtml_legend=1 00:06:54.484 --rc geninfo_all_blocks=1 00:06:54.484 --rc geninfo_unexecuted_blocks=1 00:06:54.484 00:06:54.484 ' 00:06:54.484 15:25:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:54.484 15:25:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:54.484 15:25:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:54.484 15:25:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:54.484 15:25:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.484 15:25:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.484 15:25:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.484 ************************************ 
00:06:54.484 START TEST default_locks 00:06:54.484 ************************************ 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2103808 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2103808 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2103808 ']' 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.484 15:25:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.484 [2024-11-03 15:25:32.173910] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:54.484 [2024-11-03 15:25:32.173956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103808 ] 00:06:54.484 [2024-11-03 15:25:32.250290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.484 [2024-11-03 15:25:32.272295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.743 15:25:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.743 15:25:32 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:54.743 15:25:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2103808 00:06:54.743 15:25:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2103808 00:06:54.743 15:25:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.311 lslocks: write error 00:06:55.311 15:25:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2103808 00:06:55.311 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2103808 ']' 00:06:55.311 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2103808 00:06:55.311 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:55.311 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.311 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2103808 00:06:55.571 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:55.571 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:55.571 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2103808' 00:06:55.571 killing process with pid 2103808 00:06:55.571 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2103808 00:06:55.571 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2103808 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2103808 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2103808 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2103808 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2103808 ']' 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
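The lcov version gate traced at the head of TEST cpu_locks (scripts/common.sh: lt 1.15 2 via cmp_versions) splits each version string on '.-:' and compares fields numerically, left to right. A self-contained sketch of that walk, with the decimal() digit validation elided:

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {                                  # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      a=${ver1[v]:-0} b=${ver2[v]:-0}
      (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
      (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # all fields equal
  }
  # lt 1.15 2: the first fields give 1 < 2, so '<' holds and the LCOV_OPTS branch is taken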
00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2103808) - No such process 00:06:55.830 ERROR: process (pid: 2103808) is no longer running 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:55.830 00:06:55.830 real 0m1.311s 00:06:55.830 user 0m1.281s 00:06:55.830 sys 0m0.670s 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.830 15:25:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 ************************************ 00:06:55.830 END TEST default_locks 00:06:55.830 ************************************ 00:06:55.830 15:25:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:55.830 15:25:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:55.830 15:25:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.830 15:25:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 ************************************ 00:06:55.830 START TEST default_locks_via_rpc 00:06:55.830 ************************************ 00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2104102 00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2104102 00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2104102 ']' 00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
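That NOT waitforlisten sequence is the suite's expected-failure idiom: the wrapped command must fail for the test to pass. A simplified sketch (the real helper also folds signal exit statuses, elided here):

  NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command, capture its exit status
    (( es != 0 ))    # invert: only a failure makes NOT succeed
  }
  # NOT waitforlisten 2103808 passes precisely because pid 2103808 is gone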
00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.830 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 [2024-11-03 15:25:33.541729] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:55.830 [2024-11-03 15:25:33.541772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104102 ] 00:06:55.830 [2024-11-03 15:25:33.619081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.090 [2024-11-03 15:25:33.641736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2104102 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2104102 00:06:56.090 15:25:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2104102 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2104102 ']' 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2104102 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2104102 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:56.658 
15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2104102' 00:06:56.658 killing process with pid 2104102 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2104102 00:06:56.658 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2104102 00:06:56.918 00:06:56.918 real 0m1.012s 00:06:56.918 user 0m0.970s 00:06:56.918 sys 0m0.486s 00:06:56.918 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.918 15:25:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.918 ************************************ 00:06:56.918 END TEST default_locks_via_rpc 00:06:56.918 ************************************ 00:06:56.918 15:25:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:56.918 15:25:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.918 15:25:34 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.918 15:25:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.918 ************************************ 00:06:56.918 START TEST non_locking_app_on_locked_coremask 00:06:56.918 ************************************ 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2104272 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2104272 /var/tmp/spdk.sock 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2104272 ']' 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.918 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.918 [2024-11-03 15:25:34.651537] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
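TEST default_locks_via_rpc above drove the same lock lifecycle purely over the RPC socket. Reduced to plain rpc.py calls it is roughly the following sketch ($tgt_pid stands in for the spdk_tgt pid, 2104102 in the trace):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  "$rpc" framework_disable_cpumask_locks   # drop the flocks under /var/tmp/spdk_cpu_lock_*
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "no core locks held"
  "$rpc" framework_enable_cpumask_locks    # take them back for the app's cpumask
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core locks held again"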
00:06:56.918 [2024-11-03 15:25:34.651585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104272 ] 00:06:57.177 [2024-11-03 15:25:34.728242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.178 [2024-11-03 15:25:34.751054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2104402 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2104402 /var/tmp/spdk2.sock 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2104402 ']' 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.178 15:25:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.438 [2024-11-03 15:25:34.995347] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:57.438 [2024-11-03 15:25:34.995401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104402 ] 00:06:57.438 [2024-11-03 15:25:35.106681] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
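The launch just traced is the heart of TEST non_locking_app_on_locked_coremask: the second target opts out of core locking, so two apps can share core 0. A sketch of the pattern (binary path as in the log, pid variables hypothetical):

  spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                                              # claims /var/tmp/spdk_cpu_lock_000
  pid1=$!
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                                           # same core, but takes no lock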
00:06:57.438 [2024-11-03 15:25:35.106708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.438 [2024-11-03 15:25:35.153404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.375 15:25:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.375 15:25:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:58.375 15:25:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2104272 00:06:58.375 15:25:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2104272 00:06:58.375 15:25:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.943 lslocks: write error 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2104272 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2104272 ']' 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2104272 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2104272 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2104272' 00:06:58.943 killing process with pid 2104272 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2104272 00:06:58.943 15:25:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2104272 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2104402 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2104402 ']' 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2104402 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2104402 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2104402' 00:06:59.512 
killing process with pid 2104402 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2104402 00:06:59.512 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2104402 00:06:59.772 00:06:59.772 real 0m2.893s 00:06:59.772 user 0m3.049s 00:06:59.772 sys 0m1.072s 00:06:59.772 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.772 15:25:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.772 ************************************ 00:06:59.772 END TEST non_locking_app_on_locked_coremask 00:06:59.772 ************************************ 00:06:59.772 15:25:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:59.772 15:25:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.772 15:25:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.772 15:25:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.032 ************************************ 00:07:00.032 START TEST locking_app_on_unlocked_coremask 00:07:00.032 ************************************ 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2104804 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2104804 /var/tmp/spdk.sock 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2104804 ']' 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.032 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.032 [2024-11-03 15:25:37.625693] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:00.032 [2024-11-03 15:25:37.625741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104804 ] 00:07:00.032 [2024-11-03 15:25:37.702674] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
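locks_exist, run before every killprocess in this suite, is a one-liner over lslocks, restated here as a sketch; the stray "lslocks: write error" lines above are most likely lslocks hitting a closed pipe after grep -q exits on its first match, and are harmless:

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # matches the flock entries spdk_cpu_lock_NNN
  }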
00:07:00.032 [2024-11-03 15:25:37.702700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.032 [2024-11-03 15:25:37.725120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2104966 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2104966 /var/tmp/spdk2.sock 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2104966 ']' 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.292 15:25:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.292 [2024-11-03 15:25:37.969329] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:00.292 [2024-11-03 15:25:37.969380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104966 ] 00:07:00.292 [2024-11-03 15:25:38.078202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.551 [2024-11-03 15:25:38.125119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.119 15:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.119 15:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:01.119 15:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2104966 00:07:01.119 15:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2104966 00:07:01.119 15:25:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.056 lslocks: write error 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2104804 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2104804 ']' 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2104804 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2104804 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2104804' 00:07:02.056 killing process with pid 2104804 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2104804 00:07:02.056 15:25:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2104804 00:07:02.624 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2104966 00:07:02.624 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2104966 ']' 00:07:02.625 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2104966 00:07:02.625 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:02.625 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.625 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2104966 00:07:02.625 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:02.625 15:25:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:02.625 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2104966' 00:07:02.625 killing process with pid 2104966 00:07:02.625 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2104966 00:07:02.625 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2104966 00:07:03.193 00:07:03.193 real 0m3.102s 00:07:03.193 user 0m3.286s 00:07:03.193 sys 0m1.143s 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 ************************************ 00:07:03.193 END TEST locking_app_on_unlocked_coremask 00:07:03.193 ************************************ 00:07:03.193 15:25:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:03.193 15:25:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.193 15:25:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.193 15:25:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 ************************************ 00:07:03.193 START TEST locking_app_on_locked_coremask 00:07:03.193 ************************************ 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2105422 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2105422 /var/tmp/spdk.sock 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2105422 ']' 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.193 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.194 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.194 15:25:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.194 [2024-11-03 15:25:40.806208] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:03.194 [2024-11-03 15:25:40.806259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105422 ] 00:07:03.194 [2024-11-03 15:25:40.884860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.194 [2024-11-03 15:25:40.907188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2105541 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2105541 /var/tmp/spdk2.sock 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2105541 /var/tmp/spdk2.sock 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2105541 /var/tmp/spdk2.sock 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2105541 ']' 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.453 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.453 [2024-11-03 15:25:41.154223] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:03.453 [2024-11-03 15:25:41.154276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105541 ] 00:07:03.712 [2024-11-03 15:25:41.262134] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2105422 has claimed it. 00:07:03.712 [2024-11-03 15:25:41.262174] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:04.281 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2105541) - No such process 00:07:04.281 ERROR: process (pid: 2105541) is no longer running 00:07:04.281 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.281 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:04.281 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:04.281 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.281 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.281 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.281 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2105422 00:07:04.281 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2105422 00:07:04.281 15:25:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.541 lslocks: write error 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2105422 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2105422 ']' 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2105422 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2105422 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2105422' 00:07:04.541 killing process with pid 2105422 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2105422 00:07:04.541 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2105422 00:07:04.801 00:07:04.801 real 0m1.725s 00:07:04.801 user 0m1.846s 00:07:04.801 sys 0m0.643s 00:07:04.801 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
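TEST locking_app_on_locked_coremask above shows the enforcement path: with locking active on both sides, the second claim on core 0 is fatal and the inverted wait is the passing outcome. Sketched (binary path as in the log, pid variables hypothetical):

  "$spdk_tgt" -m 0x1 & pid1=$!                        # holds /var/tmp/spdk_cpu_lock_000
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock       # pid2 aborts with "Cannot create lock
                                                      # on core 0, probably process $pid1 has
                                                      # claimed it", so NOT succeeds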
00:07:04.801 15:25:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.801 ************************************ 00:07:04.801 END TEST locking_app_on_locked_coremask 00:07:04.801 ************************************ 00:07:04.801 15:25:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.801 15:25:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.801 15:25:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.801 15:25:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.801 ************************************ 00:07:04.801 START TEST locking_overlapped_coremask 00:07:04.801 ************************************ 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2105837 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2105837 /var/tmp/spdk.sock 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2105837 ']' 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:04.801 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.060 [2024-11-03 15:25:42.617115] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:05.060 [2024-11-03 15:25:42.617163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105837 ] 00:07:05.060 [2024-11-03 15:25:42.696049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.060 [2024-11-03 15:25:42.721399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.060 [2024-11-03 15:25:42.721495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.060 [2024-11-03 15:25:42.721495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2105847 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2105847 /var/tmp/spdk2.sock 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2105847 /var/tmp/spdk2.sock 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2105847 /var/tmp/spdk2.sock 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2105847 ']' 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:05.320 15:25:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.320 [2024-11-03 15:25:42.968986] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
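The two masks in play make the coming conflict explicit: -m 0x7 spans cores 0-2 and -m 0x1c spans cores 2-4, colliding exactly on core 2. In shell arithmetic:

  mask1=0x7; mask2=0x1c
  printf 'shared bits: 0x%x\n' $(( mask1 & mask2 ))   # 0x4, i.e. core 2
  # the 0x1c app therefore cannot flock /var/tmp/spdk_cpu_lock_002 and exits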
00:07:05.320 [2024-11-03 15:25:42.969040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105847 ] 00:07:05.320 [2024-11-03 15:25:43.079416] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2105837 has claimed it. 00:07:05.320 [2024-11-03 15:25:43.079469] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.904 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2105847) - No such process 00:07:05.904 ERROR: process (pid: 2105847) is no longer running 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2105837 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2105837 ']' 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2105837 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2105837 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2105837' 00:07:05.904 killing process with pid 2105837 00:07:05.904 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2105837 00:07:05.904 15:25:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2105837 00:07:06.216 00:07:06.216 real 0m1.403s 00:07:06.216 user 0m3.855s 00:07:06.216 sys 0m0.452s 00:07:06.216 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.216 15:25:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.216 ************************************ 00:07:06.216 END TEST locking_overlapped_coremask 00:07:06.216 ************************************ 00:07:06.529 15:25:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:06.529 15:25:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:06.529 15:25:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.529 15:25:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.529 ************************************ 00:07:06.529 START TEST locking_overlapped_coremask_via_rpc 00:07:06.529 ************************************ 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2106126 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2106126 /var/tmp/spdk.sock 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2106126 ']' 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:06.529 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.529 [2024-11-03 15:25:44.104578] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:06.529 [2024-11-03 15:25:44.104626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106126 ] 00:07:06.529 [2024-11-03 15:25:44.181132] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
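check_remaining_locks, run at the end of the previous test, verifies that the surviving -m 0x7 app still owns exactly its three lock files; restated from the trace:

  check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of -m 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }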
00:07:06.529 [2024-11-03 15:25:44.181158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.529 [2024-11-03 15:25:44.205115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.529 [2024-11-03 15:25:44.205210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.529 [2024-11-03 15:25:44.205212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2106158 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2106158 /var/tmp/spdk2.sock 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2106158 ']' 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:06.789 15:25:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.789 [2024-11-03 15:25:44.452504] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:06.789 [2024-11-03 15:25:44.452558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106158 ] 00:07:06.789 [2024-11-03 15:25:44.563554] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:06.789 [2024-11-03 15:25:44.563591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.060 [2024-11-03 15:25:44.616413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.060 [2024-11-03 15:25:44.616531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.060 [2024-11-03 15:25:44.616533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.631 [2024-11-03 15:25:45.294044] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2106126 has claimed it. 
00:07:07.631 request: 00:07:07.631 { 00:07:07.631 "method": "framework_enable_cpumask_locks", 00:07:07.631 "req_id": 1 00:07:07.631 } 00:07:07.631 Got JSON-RPC error response 00:07:07.631 response: 00:07:07.631 { 00:07:07.631 "code": -32603, 00:07:07.631 "message": "Failed to claim CPU core: 2" 00:07:07.631 } 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.631 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.632 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2106126 /var/tmp/spdk.sock 00:07:07.632 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2106126 ']' 00:07:07.632 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.632 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.632 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.632 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.632 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.891 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:07.891 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:07.891 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2106158 /var/tmp/spdk2.sock 00:07:07.891 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2106158 ']' 00:07:07.891 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.891 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.891 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
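Note on the -32603 failure just shown: the first target (pid 2106126) holds coremask 0x7 (cores 0-2) and the second (pid 2106158) was started with 0x1c (cores 2-4), so the masks intersect at core 2. Both booted only because --disable-cpumask-locks skipped lock creation; once framework_enable_cpumask_locks succeeds on the first instance, the same RPC against /var/tmp/spdk2.sock must fail, exactly as the "Cannot create lock on core 2" error reports. A sketch of the overlap check and the failing call, using the sockets named in the trace:

    printf 'overlapping cores mask: 0x%x\n' $(( 0x7 & 0x1c ))             # 0x4 -> core 2
    scripts/rpc.py framework_enable_cpumask_locks                          # first instance: succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already locked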
00:07:07.891 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.891 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.150 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.150 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:08.150 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:08.150 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.150 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.150 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.150 00:07:08.150 real 0m1.668s 00:07:08.150 user 0m0.794s 00:07:08.150 sys 0m0.159s 00:07:08.150 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.150 15:25:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.150 ************************************ 00:07:08.150 END TEST locking_overlapped_coremask_via_rpc 00:07:08.150 ************************************ 00:07:08.150 15:25:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:08.150 15:25:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2106126 ]] 00:07:08.150 15:25:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2106126 00:07:08.150 15:25:45 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2106126 ']' 00:07:08.150 15:25:45 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2106126 00:07:08.150 15:25:45 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:08.150 15:25:45 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:08.150 15:25:45 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2106126 00:07:08.150 15:25:45 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:08.150 15:25:45 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:08.150 15:25:45 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2106126' 00:07:08.151 killing process with pid 2106126 00:07:08.151 15:25:45 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2106126 00:07:08.151 15:25:45 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2106126 00:07:08.410 15:25:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2106158 ]] 00:07:08.410 15:25:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2106158 00:07:08.410 15:25:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2106158 ']' 00:07:08.410 15:25:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2106158 00:07:08.410 15:25:46 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:08.410 15:25:46 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:07:08.410 15:25:46 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2106158 00:07:08.410 15:25:46 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:08.410 15:25:46 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:08.410 15:25:46 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2106158' 00:07:08.410 killing process with pid 2106158 00:07:08.669 15:25:46 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2106158 00:07:08.669 15:25:46 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2106158 00:07:08.929 15:25:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.929 15:25:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:08.929 15:25:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2106126 ]] 00:07:08.929 15:25:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2106126 00:07:08.929 15:25:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2106126 ']' 00:07:08.929 15:25:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2106126 00:07:08.929 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2106126) - No such process 00:07:08.929 15:25:46 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2106126 is not found' 00:07:08.929 Process with pid 2106126 is not found 00:07:08.929 15:25:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2106158 ]] 00:07:08.929 15:25:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2106158 00:07:08.929 15:25:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2106158 ']' 00:07:08.929 15:25:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2106158 00:07:08.929 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2106158) - No such process 00:07:08.929 15:25:46 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2106158 is not found' 00:07:08.929 Process with pid 2106158 is not found 00:07:08.929 15:25:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.929 00:07:08.929 real 0m14.618s 00:07:08.929 user 0m24.880s 00:07:08.929 sys 0m5.735s 00:07:08.929 15:25:46 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.929 15:25:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.929 ************************************ 00:07:08.929 END TEST cpu_locks 00:07:08.929 ************************************ 00:07:08.929 00:07:08.929 real 0m38.797s 00:07:08.929 user 1m12.152s 00:07:08.929 sys 0m9.957s 00:07:08.929 15:25:46 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.929 15:25:46 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.929 ************************************ 00:07:08.929 END TEST event 00:07:08.929 ************************************ 00:07:08.929 15:25:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:08.929 15:25:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:08.929 15:25:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.929 15:25:46 -- common/autotest_common.sh@10 -- # set +x 00:07:08.929 ************************************ 00:07:08.929 START TEST thread 00:07:08.929 ************************************ 00:07:08.929 15:25:46 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:09.189 * Looking for test storage... 00:07:09.189 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:09.189 15:25:46 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:09.189 15:25:46 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.189 15:25:46 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.189 15:25:46 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.189 15:25:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.189 15:25:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.189 15:25:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.189 15:25:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.189 15:25:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.189 15:25:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.189 15:25:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.189 15:25:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.189 15:25:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.189 15:25:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.189 15:25:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.189 15:25:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:09.189 15:25:46 thread -- scripts/common.sh@345 -- # : 1 00:07:09.189 15:25:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.189 15:25:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.189 15:25:46 thread -- scripts/common.sh@365 -- # decimal 1 00:07:09.189 15:25:46 thread -- scripts/common.sh@353 -- # local d=1 00:07:09.189 15:25:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.189 15:25:46 thread -- scripts/common.sh@355 -- # echo 1 00:07:09.189 15:25:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.189 15:25:46 thread -- scripts/common.sh@366 -- # decimal 2 00:07:09.189 15:25:46 thread -- scripts/common.sh@353 -- # local d=2 00:07:09.189 15:25:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.189 15:25:46 thread -- scripts/common.sh@355 -- # echo 2 00:07:09.189 15:25:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.189 15:25:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.189 15:25:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.189 15:25:46 thread -- scripts/common.sh@368 -- # return 0 00:07:09.189 15:25:46 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.189 15:25:46 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.190 --rc genhtml_branch_coverage=1 00:07:09.190 --rc genhtml_function_coverage=1 00:07:09.190 --rc genhtml_legend=1 00:07:09.190 --rc geninfo_all_blocks=1 00:07:09.190 --rc geninfo_unexecuted_blocks=1 00:07:09.190 00:07:09.190 ' 00:07:09.190 15:25:46 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.190 --rc genhtml_branch_coverage=1 00:07:09.190 --rc genhtml_function_coverage=1 00:07:09.190 --rc genhtml_legend=1 00:07:09.190 --rc geninfo_all_blocks=1 00:07:09.190 --rc geninfo_unexecuted_blocks=1 00:07:09.190 00:07:09.190 ' 00:07:09.190 15:25:46 thread -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.190 --rc genhtml_branch_coverage=1 00:07:09.190 --rc genhtml_function_coverage=1 00:07:09.190 --rc genhtml_legend=1 00:07:09.190 --rc geninfo_all_blocks=1 00:07:09.190 --rc geninfo_unexecuted_blocks=1 00:07:09.190 00:07:09.190 ' 00:07:09.190 15:25:46 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.190 --rc genhtml_branch_coverage=1 00:07:09.190 --rc genhtml_function_coverage=1 00:07:09.190 --rc genhtml_legend=1 00:07:09.190 --rc geninfo_all_blocks=1 00:07:09.190 --rc geninfo_unexecuted_blocks=1 00:07:09.190 00:07:09.190 ' 00:07:09.190 15:25:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.190 15:25:46 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:09.190 15:25:46 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.190 15:25:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.190 ************************************ 00:07:09.190 START TEST thread_poller_perf 00:07:09.190 ************************************ 00:07:09.190 15:25:46 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.190 [2024-11-03 15:25:46.896168] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:09.190 [2024-11-03 15:25:46.896249] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106638 ] 00:07:09.190 [2024-11-03 15:25:46.974947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.449 [2024-11-03 15:25:46.997151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.449 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:10.387 [2024-11-03T14:25:48.177Z] ====================================== 00:07:10.387 [2024-11-03T14:25:48.177Z] busy:2509408954 (cyc) 00:07:10.387 [2024-11-03T14:25:48.177Z] total_run_count: 434000 00:07:10.387 [2024-11-03T14:25:48.177Z] tsc_hz: 2500000000 (cyc) 00:07:10.387 [2024-11-03T14:25:48.177Z] ====================================== 00:07:10.387 [2024-11-03T14:25:48.177Z] poller_cost: 5782 (cyc), 2312 (nsec) 00:07:10.387 00:07:10.387 real 0m1.158s 00:07:10.387 user 0m1.073s 00:07:10.387 sys 0m0.082s 00:07:10.387 15:25:48 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:10.387 15:25:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.387 ************************************ 00:07:10.387 END TEST thread_poller_perf 00:07:10.387 ************************************ 00:07:10.387 15:25:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.387 15:25:48 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:10.387 15:25:48 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.387 15:25:48 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.387 ************************************ 00:07:10.387 START TEST thread_poller_perf 00:07:10.387 ************************************ 00:07:10.387 15:25:48 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.387 [2024-11-03 15:25:48.136790] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:10.387 [2024-11-03 15:25:48.136872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106824 ] 00:07:10.646 [2024-11-03 15:25:48.217234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.646 [2024-11-03 15:25:48.238608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.646 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:11.584 [2024-11-03T14:25:49.374Z] ====================================== 00:07:11.584 [2024-11-03T14:25:49.374Z] busy:2501824418 (cyc) 00:07:11.584 [2024-11-03T14:25:49.374Z] total_run_count: 5602000 00:07:11.584 [2024-11-03T14:25:49.374Z] tsc_hz: 2500000000 (cyc) 00:07:11.584 [2024-11-03T14:25:49.374Z] ====================================== 00:07:11.584 [2024-11-03T14:25:49.374Z] poller_cost: 446 (cyc), 178 (nsec) 00:07:11.584 00:07:11.584 real 0m1.159s 00:07:11.584 user 0m1.080s 00:07:11.584 sys 0m0.076s 00:07:11.584 15:25:49 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.584 15:25:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.584 ************************************ 00:07:11.584 END TEST thread_poller_perf 00:07:11.584 ************************************ 00:07:11.584 15:25:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:11.584 00:07:11.584 real 0m2.679s 00:07:11.584 user 0m2.326s 00:07:11.584 sys 0m0.372s 00:07:11.584 15:25:49 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.584 15:25:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.584 ************************************ 00:07:11.584 END TEST thread 00:07:11.584 ************************************ 00:07:11.584 15:25:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:11.584 15:25:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.584 15:25:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:11.584 15:25:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.584 15:25:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.844 ************************************ 00:07:11.844 START TEST app_cmdline 00:07:11.844 ************************************ 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.844 * Looking for test storage... 
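Note on the two result blocks above: with -b 1000 pollers run over -t 1 second, poller_cost is evidently busy cycles divided by total_run_count, converted to nanoseconds via the 2500000000 Hz TSC. The same arithmetic in shell, with the figures copied from the output (a reading of the printed numbers, not the poller_perf source):

    echo $(( 2509408954 / 434000 ))                 # 5782 cyc  (1 usec period run)
    echo $(( 2509408954 / 434000 * 1000 / 2500 ))   # 2312 nsec
    echo $(( 2501824418 / 5602000 ))                # 446 cyc   (0 usec period run)
    echo $(( 2501824418 / 5602000 * 1000 / 2500 ))  # 178 nsec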
00:07:11.844 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.844 15:25:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:11.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.844 --rc genhtml_branch_coverage=1 00:07:11.844 --rc genhtml_function_coverage=1 00:07:11.844 --rc genhtml_legend=1 00:07:11.844 --rc geninfo_all_blocks=1 00:07:11.844 --rc geninfo_unexecuted_blocks=1 00:07:11.844 00:07:11.844 ' 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:11.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.844 --rc genhtml_branch_coverage=1 00:07:11.844 --rc genhtml_function_coverage=1 00:07:11.844 --rc genhtml_legend=1 00:07:11.844 --rc geninfo_all_blocks=1 00:07:11.844 --rc geninfo_unexecuted_blocks=1 
00:07:11.844 00:07:11.844 ' 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:11.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.844 --rc genhtml_branch_coverage=1 00:07:11.844 --rc genhtml_function_coverage=1 00:07:11.844 --rc genhtml_legend=1 00:07:11.844 --rc geninfo_all_blocks=1 00:07:11.844 --rc geninfo_unexecuted_blocks=1 00:07:11.844 00:07:11.844 ' 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:11.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.844 --rc genhtml_branch_coverage=1 00:07:11.844 --rc genhtml_function_coverage=1 00:07:11.844 --rc genhtml_legend=1 00:07:11.844 --rc geninfo_all_blocks=1 00:07:11.844 --rc geninfo_unexecuted_blocks=1 00:07:11.844 00:07:11.844 ' 00:07:11.844 15:25:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:11.844 15:25:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2107161 00:07:11.844 15:25:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:11.844 15:25:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2107161 00:07:11.844 15:25:49 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2107161 ']' 00:07:11.845 15:25:49 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.845 15:25:49 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.845 15:25:49 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.845 15:25:49 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.845 15:25:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.104 [2024-11-03 15:25:49.653156] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:12.104 [2024-11-03 15:25:49.653209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2107161 ] 00:07:12.104 [2024-11-03 15:25:49.727472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.104 [2024-11-03 15:25:49.749962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.362 15:25:49 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.363 15:25:49 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:12.363 15:25:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:12.363 { 00:07:12.363 "version": "SPDK v25.01-pre git sha1 fa3ab7384", 00:07:12.363 "fields": { 00:07:12.363 "major": 25, 00:07:12.363 "minor": 1, 00:07:12.363 "patch": 0, 00:07:12.363 "suffix": "-pre", 00:07:12.363 "commit": "fa3ab7384" 00:07:12.363 } 00:07:12.363 } 00:07:12.363 15:25:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:12.363 15:25:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:12.363 15:25:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:12.363 15:25:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:12.363 15:25:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:12.363 15:25:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:12.363 15:25:50 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.363 15:25:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.363 15:25:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.621 15:25:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:12.621 15:25:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:12.621 15:25:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.621 request: 00:07:12.621 { 00:07:12.621 "method": "env_dpdk_get_mem_stats", 00:07:12.621 "req_id": 1 00:07:12.621 } 00:07:12.621 Got JSON-RPC error response 00:07:12.621 response: 00:07:12.621 { 00:07:12.621 "code": -32601, 00:07:12.621 "message": "Method not found" 00:07:12.621 } 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.621 15:25:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2107161 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2107161 ']' 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2107161 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:12.621 15:25:50 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2107161 00:07:12.880 15:25:50 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:12.880 15:25:50 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:12.880 15:25:50 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2107161' 00:07:12.880 killing process with pid 2107161 00:07:12.880 15:25:50 app_cmdline -- common/autotest_common.sh@971 -- # kill 2107161 00:07:12.880 15:25:50 app_cmdline -- common/autotest_common.sh@976 -- # wait 2107161 00:07:13.139 00:07:13.139 real 0m1.333s 00:07:13.139 user 0m1.504s 00:07:13.139 sys 0m0.508s 00:07:13.139 15:25:50 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.139 15:25:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.139 ************************************ 00:07:13.139 END TEST app_cmdline 00:07:13.139 ************************************ 00:07:13.139 15:25:50 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:13.139 15:25:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:13.139 15:25:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.139 15:25:50 -- common/autotest_common.sh@10 -- # set +x 00:07:13.139 ************************************ 00:07:13.139 START TEST version 00:07:13.139 ************************************ 00:07:13.139 15:25:50 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:13.139 * Looking for test storage... 
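Note on the app_cmdline run that just ended: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, and the log bears that out: both whitelisted methods answer, while env_dpdk_get_mem_stats comes back as JSON-RPC error -32601 ("Method not found"). A sketch of the same probe, with paths as printed in the trace:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two methods
    scripts/rpc.py spdk_get_version         # allowed: returns the version object shown above
    scripts/rpc.py env_dpdk_get_mem_stats   # rejected with -32601, Method not found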
00:07:13.139 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:13.139 15:25:50 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.139 15:25:50 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.139 15:25:50 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.399 15:25:50 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.399 15:25:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.399 15:25:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.399 15:25:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.399 15:25:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.399 15:25:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.399 15:25:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.399 15:25:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.399 15:25:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.399 15:25:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.399 15:25:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.399 15:25:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.399 15:25:50 version -- scripts/common.sh@344 -- # case "$op" in 00:07:13.399 15:25:50 version -- scripts/common.sh@345 -- # : 1 00:07:13.399 15:25:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.399 15:25:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.399 15:25:50 version -- scripts/common.sh@365 -- # decimal 1 00:07:13.399 15:25:50 version -- scripts/common.sh@353 -- # local d=1 00:07:13.399 15:25:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.399 15:25:50 version -- scripts/common.sh@355 -- # echo 1 00:07:13.399 15:25:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.399 15:25:50 version -- scripts/common.sh@366 -- # decimal 2 00:07:13.399 15:25:50 version -- scripts/common.sh@353 -- # local d=2 00:07:13.399 15:25:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.399 15:25:50 version -- scripts/common.sh@355 -- # echo 2 00:07:13.399 15:25:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.399 15:25:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.399 15:25:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.399 15:25:50 version -- scripts/common.sh@368 -- # return 0 00:07:13.399 15:25:50 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.399 15:25:50 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.399 --rc genhtml_branch_coverage=1 00:07:13.399 --rc genhtml_function_coverage=1 00:07:13.399 --rc genhtml_legend=1 00:07:13.399 --rc geninfo_all_blocks=1 00:07:13.399 --rc geninfo_unexecuted_blocks=1 00:07:13.399 00:07:13.399 ' 00:07:13.399 15:25:50 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.399 --rc genhtml_branch_coverage=1 00:07:13.399 --rc genhtml_function_coverage=1 00:07:13.399 --rc genhtml_legend=1 00:07:13.399 --rc geninfo_all_blocks=1 00:07:13.399 --rc geninfo_unexecuted_blocks=1 00:07:13.399 00:07:13.399 ' 00:07:13.399 15:25:50 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.399 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.399 --rc genhtml_branch_coverage=1 00:07:13.399 --rc genhtml_function_coverage=1 00:07:13.399 --rc genhtml_legend=1 00:07:13.399 --rc geninfo_all_blocks=1 00:07:13.399 --rc geninfo_unexecuted_blocks=1 00:07:13.399 00:07:13.399 ' 00:07:13.399 15:25:50 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.399 --rc genhtml_branch_coverage=1 00:07:13.399 --rc genhtml_function_coverage=1 00:07:13.399 --rc genhtml_legend=1 00:07:13.399 --rc geninfo_all_blocks=1 00:07:13.399 --rc geninfo_unexecuted_blocks=1 00:07:13.399 00:07:13.399 ' 00:07:13.399 15:25:50 version -- app/version.sh@17 -- # get_header_version major 00:07:13.399 15:25:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:13.399 15:25:50 version -- app/version.sh@14 -- # cut -f2 00:07:13.399 15:25:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.399 15:25:51 version -- app/version.sh@17 -- # major=25 00:07:13.399 15:25:51 version -- app/version.sh@18 -- # get_header_version minor 00:07:13.399 15:25:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:13.399 15:25:51 version -- app/version.sh@14 -- # cut -f2 00:07:13.399 15:25:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.399 15:25:51 version -- app/version.sh@18 -- # minor=1 00:07:13.399 15:25:51 version -- app/version.sh@19 -- # get_header_version patch 00:07:13.399 15:25:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:13.399 15:25:51 version -- app/version.sh@14 -- # cut -f2 00:07:13.399 15:25:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.399 15:25:51 version -- app/version.sh@19 -- # patch=0 00:07:13.399 15:25:51 version -- app/version.sh@20 -- # get_header_version suffix 00:07:13.399 15:25:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:13.399 15:25:51 version -- app/version.sh@14 -- # cut -f2 00:07:13.399 15:25:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.399 15:25:51 version -- app/version.sh@20 -- # suffix=-pre 00:07:13.399 15:25:51 version -- app/version.sh@22 -- # version=25.1 00:07:13.399 15:25:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:13.399 15:25:51 version -- app/version.sh@28 -- # version=25.1rc0 00:07:13.400 15:25:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:13.400 15:25:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:13.400 15:25:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:13.400 15:25:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:13.400 00:07:13.400 real 0m0.268s 00:07:13.400 user 0m0.166s 00:07:13.400 sys 0m0.154s 00:07:13.400 15:25:51 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.400 15:25:51 version -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.400 ************************************ 00:07:13.400 END TEST version 00:07:13.400 ************************************ 00:07:13.400 15:25:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:13.400 15:25:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:13.400 15:25:51 -- spdk/autotest.sh@194 -- # uname -s 00:07:13.400 15:25:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:13.400 15:25:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.400 15:25:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.400 15:25:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:13.400 15:25:51 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:13.400 15:25:51 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:13.400 15:25:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.400 15:25:51 -- common/autotest_common.sh@10 -- # set +x 00:07:13.400 15:25:51 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:13.400 15:25:51 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:13.400 15:25:51 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:13.400 15:25:51 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:13.400 15:25:51 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']' 00:07:13.400 15:25:51 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:13.400 15:25:51 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:13.400 15:25:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.400 15:25:51 -- common/autotest_common.sh@10 -- # set +x 00:07:13.659 ************************************ 00:07:13.659 START TEST nvmf_rdma 00:07:13.659 ************************************ 00:07:13.659 15:25:51 nvmf_rdma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:13.659 * Looking for test storage... 00:07:13.659 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:13.659 15:25:51 nvmf_rdma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.659 15:25:51 nvmf_rdma -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.659 15:25:51 nvmf_rdma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.659 15:25:51 nvmf_rdma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.659 15:25:51 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:07:13.660 15:25:51 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.660 15:25:51 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:07:13.660 15:25:51 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:07:13.660 15:25:51 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.660 15:25:51 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:07:13.660 15:25:51 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.660 15:25:51 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.660 15:25:51 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.660 15:25:51 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:07:13.660 15:25:51 nvmf_rdma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.660 15:25:51 nvmf_rdma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.660 --rc genhtml_branch_coverage=1 00:07:13.660 --rc genhtml_function_coverage=1 00:07:13.660 --rc genhtml_legend=1 00:07:13.660 --rc geninfo_all_blocks=1 00:07:13.660 --rc geninfo_unexecuted_blocks=1 00:07:13.660 00:07:13.660 ' 00:07:13.660 15:25:51 nvmf_rdma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.660 --rc genhtml_branch_coverage=1 00:07:13.660 --rc genhtml_function_coverage=1 00:07:13.660 --rc genhtml_legend=1 00:07:13.660 --rc geninfo_all_blocks=1 00:07:13.660 --rc geninfo_unexecuted_blocks=1 00:07:13.660 00:07:13.660 ' 00:07:13.660 15:25:51 nvmf_rdma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.660 --rc genhtml_branch_coverage=1 00:07:13.660 --rc genhtml_function_coverage=1 00:07:13.660 --rc genhtml_legend=1 00:07:13.660 --rc geninfo_all_blocks=1 00:07:13.660 --rc geninfo_unexecuted_blocks=1 00:07:13.660 00:07:13.660 ' 00:07:13.660 15:25:51 nvmf_rdma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.660 --rc genhtml_branch_coverage=1 00:07:13.660 --rc genhtml_function_coverage=1 00:07:13.660 --rc genhtml_legend=1 00:07:13.660 --rc geninfo_all_blocks=1 00:07:13.660 --rc geninfo_unexecuted_blocks=1 00:07:13.660 00:07:13.660 ' 00:07:13.660 15:25:51 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:13.660 15:25:51 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:13.660 15:25:51 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:13.660 15:25:51 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:13.660 15:25:51 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.660 15:25:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:13.920 ************************************ 00:07:13.920 START TEST nvmf_target_core 00:07:13.920 ************************************ 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:13.920 * Looking for test storage... 00:07:13.920 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.920 --rc genhtml_branch_coverage=1 00:07:13.920 --rc genhtml_function_coverage=1 00:07:13.920 --rc genhtml_legend=1 00:07:13.920 --rc geninfo_all_blocks=1 00:07:13.920 --rc geninfo_unexecuted_blocks=1 00:07:13.920 00:07:13.920 ' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.920 --rc genhtml_branch_coverage=1 00:07:13.920 --rc genhtml_function_coverage=1 00:07:13.920 --rc genhtml_legend=1 00:07:13.920 --rc geninfo_all_blocks=1 00:07:13.920 --rc geninfo_unexecuted_blocks=1 00:07:13.920 00:07:13.920 ' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.920 --rc genhtml_branch_coverage=1 00:07:13.920 --rc genhtml_function_coverage=1 00:07:13.920 --rc genhtml_legend=1 00:07:13.920 --rc geninfo_all_blocks=1 00:07:13.920 --rc geninfo_unexecuted_blocks=1 00:07:13.920 00:07:13.920 ' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.920 --rc genhtml_branch_coverage=1 00:07:13.920 --rc genhtml_function_coverage=1 00:07:13.920 --rc genhtml_legend=1 00:07:13.920 --rc geninfo_all_blocks=1 00:07:13.920 --rc geninfo_unexecuted_blocks=1 00:07:13.920 00:07:13.920 ' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.920 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:13.920 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:13.921 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:13.921 15:25:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:13.921 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:13.921 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.921 15:25:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.921 
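Each suite in this log is launched through the run_test helper from common/autotest_common.sh: it prints the START/END banners, runs the nested script under time (the real/user/sys lines near each END TEST come from this), and propagates the exit code. A minimal sketch of that flow, paraphrased from the banners and timing lines visible in this log rather than from the script source:

    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"    # e.g. test/nvmf/target/abort.sh --transport=rdma
        local rc=$?
        echo "END TEST $test_name"
        return $rc
    }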
************************************ 00:07:13.921 START TEST nvmf_abort 00:07:13.921 ************************************ 00:07:13.921 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:14.181 * Looking for test storage... 00:07:14.181 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:14.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.181 --rc genhtml_branch_coverage=1 00:07:14.181 --rc genhtml_function_coverage=1 00:07:14.181 --rc genhtml_legend=1 00:07:14.181 --rc geninfo_all_blocks=1 00:07:14.181 --rc geninfo_unexecuted_blocks=1 00:07:14.181 00:07:14.181 ' 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:14.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.181 --rc genhtml_branch_coverage=1 00:07:14.181 --rc genhtml_function_coverage=1 00:07:14.181 --rc genhtml_legend=1 00:07:14.181 --rc geninfo_all_blocks=1 00:07:14.181 --rc geninfo_unexecuted_blocks=1 00:07:14.181 00:07:14.181 ' 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:14.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.181 --rc genhtml_branch_coverage=1 00:07:14.181 --rc genhtml_function_coverage=1 00:07:14.181 --rc genhtml_legend=1 00:07:14.181 --rc geninfo_all_blocks=1 00:07:14.181 --rc geninfo_unexecuted_blocks=1 00:07:14.181 00:07:14.181 ' 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:14.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.181 --rc genhtml_branch_coverage=1 00:07:14.181 --rc genhtml_function_coverage=1 00:07:14.181 --rc genhtml_legend=1 00:07:14.181 --rc geninfo_all_blocks=1 00:07:14.181 --rc geninfo_unexecuted_blocks=1 00:07:14.181 00:07:14.181 ' 00:07:14.181 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:14.182 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:14.182 15:25:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:22.309 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:22.309 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:22.309 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:22.309 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:22.310 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:22.310 6: mlx_0_0: <BROADCAST,MULTICAST> mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:07:22.310 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:22.310 altname enp217s0f0np0 00:07:22.310 altname ens818f0np0 00:07:22.310 inet 192.168.100.8/24 scope global mlx_0_0 00:07:22.310 valid_lft forever preferred_lft forever 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:22.310 7: mlx_0_1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:22.310 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:22.310 altname enp217s0f1np1 00:07:22.310 altname ens818f1np1 00:07:22.310 inet 192.168.100.9/24 scope global mlx_0_1 00:07:22.310 valid_lft forever preferred_lft forever 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:22.310 15:25:58 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:22.310 15:25:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:22.310 192.168.100.9' 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:22.310 192.168.100.9' 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:22.310 192.168.100.9' 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:22.310 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:22.311 15:25:59 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2111094 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2111094 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2111094 ']' 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 [2024-11-03 15:25:59.107299] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:22.311 [2024-11-03 15:25:59.107350] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.311 [2024-11-03 15:25:59.184765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.311 [2024-11-03 15:25:59.208488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.311 [2024-11-03 15:25:59.208529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.311 [2024-11-03 15:25:59.208538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.311 [2024-11-03 15:25:59.208546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.311 [2024-11-03 15:25:59.208553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
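The app_setup_trace notices above mean every tracepoint group is enabled for this nvmf_tgt instance (mask 0xFFFF, shared-memory id 0); while the target is up, a snapshot can be captured exactly as the notices suggest:

    # attach to the running app and dump a snapshot of its tracepoints
    build/bin/spdk_trace -s nvmf -i 0
    # or keep the raw trace buffer for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0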
00:07:22.311 [2024-11-03 15:25:59.210139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.311 [2024-11-03 15:25:59.210228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.311 [2024-11-03 15:25:59.210231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 [2024-11-03 15:25:59.381688] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18273d0/0x182b880) succeed. 00:07:22.311 [2024-11-03 15:25:59.399190] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1828970/0x186cf20) succeed. 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 Malloc0 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 Delay0 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 [2024-11-03 15:25:59.571364] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.311 15:25:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:22.311 [2024-11-03 15:25:59.678768] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:24.216 Initializing NVMe Controllers 00:07:24.216 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:24.216 controller IO queue size 128 less than required 00:07:24.216 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:24.216 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:24.216 Initialization complete. Launching workers. 
00:07:24.216 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42558 00:07:24.216 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42619, failed to submit 62 00:07:24.216 success 42559, unsuccessful 60, failed 0 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:24.216 rmmod nvme_rdma 00:07:24.216 rmmod nvme_fabrics 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2111094 ']' 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2111094 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2111094 ']' 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2111094 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2111094 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2111094' 00:07:24.216 killing process with pid 2111094 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2111094 00:07:24.216 15:26:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2111094 00:07:24.475 15:26:02 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.475 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:24.475 00:07:24.475 real 0m10.485s 00:07:24.475 user 0m12.999s 00:07:24.475 sys 0m5.871s 00:07:24.475 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.475 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:24.475 ************************************ 00:07:24.475 END TEST nvmf_abort 00:07:24.475 ************************************ 00:07:24.475 15:26:02 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:24.475 15:26:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:24.475 15:26:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.475 15:26:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.475 ************************************ 00:07:24.475 START TEST nvmf_ns_hotplug_stress 00:07:24.475 ************************************ 00:07:24.475 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:24.735 * Looking for test storage... 00:07:24.735 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
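This xtrace walk, repeated at the start of every suite and continuing below, is scripts/common.sh checking whether the installed lcov predates 2.0 so the legacy coverage flags apply; in outline (a paraphrase of the traced cmp_versions logic, not the script source):

    ver=$(lcov --version | awk '{print $NF}')   # 1.15 in this run
    IFS=.-: read -ra ver1 <<< "$ver"            # ver1=(1 15)
    IFS=.-: read -ra ver2 <<< "2"               # ver2=(2)
    # fields compare left to right: 1 < 2, so "lt 1.15 2" is true and
    # LCOV_OPTS keeps the pre-2.0 --rc lcov_branch/function_coverage=1 options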
00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:24.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.735 --rc genhtml_branch_coverage=1 00:07:24.735 --rc genhtml_function_coverage=1 00:07:24.735 --rc genhtml_legend=1 00:07:24.735 --rc geninfo_all_blocks=1 00:07:24.735 --rc geninfo_unexecuted_blocks=1 00:07:24.735 00:07:24.735 ' 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:24.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.735 --rc genhtml_branch_coverage=1 00:07:24.735 --rc genhtml_function_coverage=1 00:07:24.735 --rc genhtml_legend=1 00:07:24.735 --rc geninfo_all_blocks=1 00:07:24.735 --rc geninfo_unexecuted_blocks=1 00:07:24.735 00:07:24.735 ' 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:24.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.735 --rc genhtml_branch_coverage=1 00:07:24.735 --rc genhtml_function_coverage=1 00:07:24.735 --rc genhtml_legend=1 00:07:24.735 --rc geninfo_all_blocks=1 00:07:24.735 --rc geninfo_unexecuted_blocks=1 00:07:24.735 00:07:24.735 ' 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:24.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:24.735 --rc genhtml_branch_coverage=1 00:07:24.735 --rc genhtml_function_coverage=1 00:07:24.735 --rc genhtml_legend=1 00:07:24.735 --rc geninfo_all_blocks=1 00:07:24.735 --rc geninfo_unexecuted_blocks=1 00:07:24.735 00:07:24.735 ' 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.735 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.736 15:26:02 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.736 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:24.736 15:26:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:32.861 15:26:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:32.861 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:32.861 15:26:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:32.861 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:32.861 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
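Between the two "Found ..." lines above and below, the harness maps each detected mlx5 PCI function (0000:d9:00.0 and 0000:d9:00.1, device 0x1015) to its kernel net interface by globbing sysfs. A standalone sketch of that lookup; the helper name is hypothetical, since the trace does this inline through the pci_net_devs array:

    # Resolve a PCI function to its bound net device, as the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob does.
    pci_to_netdev() {
        local pci=$1
        local devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${devs[0]} ]] || return 1      # no netdev bound to this function
        echo "${devs[@]##*/}"                # strip the path: mlx_0_0, mlx_0_1, ...
    }
    pci_to_netdev 0000:d9:00.0               # prints mlx_0_0 on this rig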
00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:32.861 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:32.861 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:32.862 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.862 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:32.862 altname enp217s0f0np0 00:07:32.862 altname ens818f0np0 00:07:32.862 inet 192.168.100.8/24 scope global mlx_0_0 00:07:32.862 valid_lft forever preferred_lft forever 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:32.862 15:26:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:32.862 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.862 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:32.862 altname enp217s0f1np1 00:07:32.862 altname ens818f1np1 00:07:32.862 inet 192.168.100.9/24 scope global mlx_0_1 00:07:32.862 valid_lft forever preferred_lft forever 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:32.862 192.168.100.9' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:32.862 192.168.100.9' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:32.862 192.168.100.9' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2115016 00:07:32.862 15:26:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2115016 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2115016 ']' 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.862 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.862 [2024-11-03 15:26:09.646321] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:32.862 [2024-11-03 15:26:09.646380] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.862 [2024-11-03 15:26:09.725652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.862 [2024-11-03 15:26:09.748429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.862 [2024-11-03 15:26:09.748466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.862 [2024-11-03 15:26:09.748475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.862 [2024-11-03 15:26:09.748484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.862 [2024-11-03 15:26:09.748491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
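Before the target came up, allocate_nic_ips read the IPv4 address off each RDMA interface with the ip/awk/cut pipeline traced at nvmf/common.sh@117. The same extraction as a self-contained sketch, matching the commands shown in the trace:

    # get_ip_address, as traced: field 4 of 'ip -o -4 addr show' is the
    # CIDR address (e.g. 192.168.100.8/24); cut drops the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # -> 192.168.100.9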
00:07:32.862 [2024-11-03 15:26:09.750044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.862 [2024-11-03 15:26:09.750127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.862 [2024-11-03 15:26:09.750129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.863 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:32.863 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:07:32.863 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.863 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.863 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.863 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.863 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:32.863 15:26:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:32.863 [2024-11-03 15:26:10.090770] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13ed3d0/0x13f1880) succeed. 00:07:32.863 [2024-11-03 15:26:10.099845] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13ee970/0x1432f20) succeed. 00:07:32.863 15:26:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:32.863 15:26:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:32.863 [2024-11-03 15:26:10.602977] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:32.863 15:26:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:33.121 15:26:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:33.380 Malloc0 00:07:33.380 15:26:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:33.639 Delay0 00:07:33.639 15:26:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.898 15:26:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:07:33.898 NULL1 00:07:33.898 15:26:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:34.157 15:26:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2115566 00:07:34.157 15:26:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:34.157 15:26:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:34.157 15:26:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.535 Read completed with error (sct=0, sc=11) 00:07:35.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.535 15:26:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.535 15:26:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:35.535 15:26:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:35.793 true 00:07:35.793 15:26:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:35.793 15:26:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.730 15:26:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.730 15:26:14 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:36.730 15:26:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:36.989 true 00:07:36.989 15:26:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:36.989 15:26:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.926 15:26:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.926 15:26:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:37.926 15:26:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:38.185 true 00:07:38.185 15:26:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:38.185 15:26:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.122 15:26:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.122 15:26:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:39.122 15:26:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:39.381 true 00:07:39.381 15:26:17 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:39.381 15:26:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.317 15:26:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.317 15:26:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:40.317 15:26:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:40.576 true 00:07:40.576 15:26:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:40.576 15:26:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.513 15:26:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.513 15:26:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:41.513 15:26:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:41.772 true 00:07:41.772 15:26:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:41.772 15:26:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.032 15:26:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.291 15:26:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:42.291 15:26:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:42.291 true 00:07:42.291 15:26:20 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:42.291 15:26:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.669 15:26:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.669 15:26:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:43.669 15:26:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:43.927 true 00:07:43.927 15:26:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:43.928 15:26:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.864 15:26:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.864 15:26:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:44.864 15:26:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:45.123 true 00:07:45.123 15:26:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:45.123 15:26:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.058 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.058 15:26:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.058 15:26:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:46.058 15:26:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:46.317 true 00:07:46.317 15:26:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:46.317 15:26:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.340 15:26:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.340 15:26:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:47.340 15:26:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:47.340 true 00:07:47.340 15:26:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:47.340 15:26:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.277 15:26:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.277 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:48.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.536 15:26:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:48.536 15:26:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:48.536 true 00:07:48.795 15:26:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:48.795 15:26:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.363 15:26:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.622 15:26:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:49.622 15:26:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:49.881 true 00:07:49.881 15:26:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:49.881 15:26:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.140 15:26:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.140 15:26:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:50.140 15:26:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:50.399 true 00:07:50.399 15:26:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:50.399 15:26:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.777 15:26:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
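The add_ns/resize/remove_ns churn above is the heart of ns_hotplug_stress.sh: while spdk_nvme_perf (PID 2115566) keeps issuing reads, the script detaches namespace 1, re-attaches Delay0, and grows the NULL1 bdev by one block each pass; the suppressed "Read completed with error (sct=0, sc=11)" messages are the expected fallout of yanking a namespace mid-I/O. A condensed sketch of that loop, reconstructed from the traced rpc.py calls (the actual script may differ in detail):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID"; do                         # perf still running?
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                      # 1001, 1002, ...
        $rpc bdev_null_resize NULL1 "$null_size"          # grow the hot namespace
    done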
00:07:51.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.777 15:26:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:51.777 15:26:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:51.777 true 00:07:52.042 15:26:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:52.042 15:26:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.618 15:26:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.879 15:26:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:52.879 15:26:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:53.137 true 00:07:53.137 15:26:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:53.137 15:26:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.074 15:26:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.074 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.074 15:26:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:54.074 15:26:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:54.334 true 00:07:54.334 15:26:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:54.334 15:26:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.271 15:26:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.271 15:26:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:55.271 15:26:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:55.530 true 00:07:55.530 15:26:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:55.530 15:26:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.468 15:26:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.468 15:26:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:56.468 15:26:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:07:56.727 true 00:07:56.727 15:26:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:56.727 15:26:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.664 15:26:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.664 15:26:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:57.664 15:26:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:57.923 true 00:07:57.923 15:26:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:57.923 15:26:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.860 15:26:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.860 15:26:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:58.860 15:26:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:59.119 true 00:07:59.119 15:26:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:07:59.119 15:26:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.057 15:26:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.057 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:08:00.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.057 15:26:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:00.057 15:26:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:00.316 true 00:08:00.316 15:26:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:08:00.316 15:26:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.253 15:26:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.253 15:26:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:01.253 15:26:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:01.512 true 00:08:01.512 15:26:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:08:01.512 15:26:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.448 15:26:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:08:02.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.708 15:26:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:02.708 15:26:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:02.708 true 00:08:02.708 15:26:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:08:02.708 15:26:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.645 15:26:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.904 15:26:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:03.904 15:26:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:03.904 true 00:08:03.904 15:26:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:08:03.904 15:26:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.841 15:26:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.100 15:26:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:05.100 15:26:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:05.100 true 00:08:05.359 15:26:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566 00:08:05.359 15:26:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.359 15:26:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.618 15:26:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:05.618 15:26:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:05.877 true 00:08:05.877 
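What the cycle above is doing, as a minimal sketch reconstructed from the ns_hotplug_stress.sh xtrace markers (@44 through @50): while the backgrounded I/O generator is still alive, namespace 1 is hot-removed and re-added, and the NULL1 null bdev grows by 1 MB per pass. Only the commands and script line numbers come from the trace; $rpc_py, $PID and the starting size are assumed names and values. The suppressed "Read completed with error" messages are expected, since the initiator keeps reading a namespace that is being removed and re-added under it.

    # Sketch only -- reconstructed from the xtrace, not copied from the repo.
    # $rpc_py = assumed shorthand for scripts/rpc.py; $PID = assumed variable
    # holding the I/O generator's pid (2115566 in this run).
    null_size=1000
    while kill -0 $PID 2>/dev/null; do                                    # line 44
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46
        null_size=$((null_size + 1))                                      # line 49
        $rpc_py bdev_null_resize NULL1 $null_size                         # line 50: resize under I/O
    done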
00:08:05.877 15:26:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566
00:08:05.877 15:26:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:06.136 15:26:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:06.136 15:26:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:06.136 15:26:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:06.395 true
00:08:06.395 15:26:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566
00:08:06.395 15:26:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:06.654 15:26:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:06.912 15:26:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:06.912 15:26:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:06.912 Initializing NVMe Controllers
00:08:06.912 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:06.912 Controller IO queue size 128, less than required.
00:08:06.912 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:06.912 Controller IO queue size 128, less than required.
00:08:06.912 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:06.912 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:06.912 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:06.912 Initialization complete. Launching workers.
00:08:06.912 ========================================================
00:08:06.912                                                                                            Latency(us)
00:08:06.912 Device Information                                                           :       IOPS      MiB/s    Average        min        max
00:08:06.912 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5391.13       2.63   21178.34     888.99 1133825.90
00:08:06.912 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   34560.09      16.88    3703.55    1549.87  285158.18
00:08:06.913 ========================================================
00:08:06.913 Total                                                                        :   39951.22      19.51    6061.65     888.99 1133825.90
00:08:06.913
00:08:06.913 true
00:08:06.913 15:26:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2115566
00:08:06.913 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2115566) - No such process
00:08:06.913 15:26:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2115566
00:08:06.913 15:26:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.172 15:26:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
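A quick arithmetic cross-check of the perf summary above (this check is not part of the test): for both namespaces, throughput divided by IOPS implies roughly 512 bytes per I/O, and the Total row is the sum of the two per-namespace rows. The much higher average latency on NSID 1 is plausible given it is the Delay0 bdev that the loop above kept removing and re-adding, though the log itself does not say so explicitly.

    # Standalone sanity check; all numbers are copied from the table above.
    awk 'BEGIN {
        printf "NSID 1: %.1f bytes/io\n", 2.63  * 1024 * 1024 / 5391.13
        printf "NSID 2: %.1f bytes/io\n", 16.88 * 1024 * 1024 / 34560.09
        printf "Total IOPS: %.2f\n", 5391.13 + 34560.09   # table reports 39951.22
    }'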
00:08:07.431 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:07.431 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:07.431 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:07.431 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:07.431 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:07.690 null0
00:08:07.690 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:07.690 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:07.690 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:07.690 null1
00:08:07.949 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:07.949 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:07.949 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:07.949 null2
00:08:07.949 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:07.949 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:07.949 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:08.208 null3
00:08:08.208 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:08.208 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:08.208 15:26:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:08.467 null4
00:08:08.467 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:08.467 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:08.467 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:08.467 null5
00:08:08.727 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:08.727 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:08.727 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:08.727 null6
00:08:08.727 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:08.727 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:08.727 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:08.986 null7
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2121572 2121574 2121578 2121580 2121583 2121586 2121588 2121591
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:08.987 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:09.247 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:09.247 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:09.247 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
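The second phase being launched above, as a sketch reconstructed from the xtrace markers (add_remove() is the function traced at ns_hotplug_stress.sh lines 14-18; the driver is lines 58-66): eight null bdevs are created, then eight background workers each add and remove their own namespace ten times, and the script waits for all of them. $rpc_py is an assumed shorthand for the rpc.py path seen in the trace; everything else mirrors the traced commands. The shuffled interleaving of @16/@17/@18 entries that follows is simply those eight workers running concurrently.

    # Sketch only -- reconstructed from the xtrace, not copied from the repo.
    add_remove() {
        local nsid=$1 bdev=$2                                                        # line 14
        for ((i = 0; i < 10; i++)); do                                               # line 16
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev  # line 17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid        # line 18
        done
    }

    nthreads=8
    pids=()                                          # line 58
    for ((i = 0; i < nthreads; i++)); do             # line 59
        $rpc_py bdev_null_create null$i 100 4096     # line 60: 100 MB bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do             # line 62
        add_remove $((i + 1)) null$i &               # line 63: one worker per namespace
        pids+=($!)                                   # line 64
    done
    wait ${pids[@]}                                  # line 66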
00:08:09.247 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:09.247 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:09.247 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:09.247 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:09.247 15:26:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.506 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:09.766 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:10.026 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.286 15:26:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.546 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:10.805 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:11.064 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:11.324 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:11.324 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:11.324 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:11.324 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:11.324 15:26:48
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.324 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.324 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.324 15:26:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.324 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.324 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.324 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.583 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.583 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.583 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.583 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.584 15:26:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.584 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.844 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.103 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.362 
15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.362 15:26:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.362 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.362 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.362 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.362 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.621 15:26:50 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.621 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.881 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.881 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.881 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.881 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.881 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.881 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.881 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.881 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.140 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.141 15:26:50 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:13.141 rmmod nvme_rdma 00:08:13.141 rmmod nvme_fabrics 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2115016 ']' 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2115016 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2115016 ']' 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2115016 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.141 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2115016 00:08:13.401 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:13.401 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:13.401 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2115016' 00:08:13.401 killing process with pid 2115016 00:08:13.401 15:26:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2115016 00:08:13.401 15:26:50 
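The stretch of trace above is ten passes of the hot-plug loop at ns_hotplug_stress.sh lines 16-18: each pass attaches namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 (backed by bdevs null0-null7) in a random order, then detaches all eight, again in a random order. A minimal sketch of what those three traced lines amount to — reconstructed from the trace alone, so the shuffling helper and variable names are assumptions, not the verbatim script:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for (( i = 0; i < 10; ++i )); do                  # @16: ten passes
        for n in $(seq 8 | shuf); do                  # @17: attach NSIDs 1-8, shuffled
            "$rpc" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
        done
        for n in $(seq 8 | shuf); do                  # @18: detach the same NSIDs, shuffled
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
        done
    done

Immediately after the loop the trace shows the standard teardown: nvmftestfini clears the trap, unloads nvme-rdma and nvme-fabrics (the rmmod lines above), and kills the target process, pid 2115016.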
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2115016 00:08:13.401 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.401 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:13.401 00:08:13.401 real 0m48.916s 00:08:13.401 user 3m21.622s 00:08:13.401 sys 0m14.422s 00:08:13.401 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.401 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:13.401 ************************************ 00:08:13.401 END TEST nvmf_ns_hotplug_stress 00:08:13.401 ************************************ 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.660 ************************************ 00:08:13.660 START TEST nvmf_delete_subsystem 00:08:13.660 ************************************ 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:13.660 * Looking for test storage... 00:08:13.660 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.660 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.661 --rc genhtml_branch_coverage=1 00:08:13.661 --rc genhtml_function_coverage=1 00:08:13.661 --rc genhtml_legend=1 00:08:13.661 --rc geninfo_all_blocks=1 00:08:13.661 --rc geninfo_unexecuted_blocks=1 00:08:13.661 00:08:13.661 ' 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.661 --rc genhtml_branch_coverage=1 00:08:13.661 --rc genhtml_function_coverage=1 00:08:13.661 --rc genhtml_legend=1 00:08:13.661 --rc geninfo_all_blocks=1 00:08:13.661 --rc geninfo_unexecuted_blocks=1 00:08:13.661 00:08:13.661 ' 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.661 --rc genhtml_branch_coverage=1 00:08:13.661 --rc genhtml_function_coverage=1 00:08:13.661 --rc genhtml_legend=1 00:08:13.661 --rc geninfo_all_blocks=1 00:08:13.661 --rc geninfo_unexecuted_blocks=1 00:08:13.661 00:08:13.661 ' 00:08:13.661 15:26:51 
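The lt 1.15 2 trace just above is scripts/common.sh deciding whether the installed lcov predates 2.x before settling on the --rc lcov_branch_coverage/lcov_function_coverage flags exported here. A hedged reconstruction of that comparison, assembled from the @333-@368 trace entries rather than copied from the script — details such as the decimal sanitizer each field passes through are simplified, so the real helper may differ:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"          # split fields on dots, dashes, colons
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == '>' ]] && return 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == '<' ]] && return 0
            (( ${ver1[v]:-0} != ${ver2[v]:-0} )) && return 1
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt 1.15 2 && echo "lcov predates 2.x"       # 1 < 2 at the first field: return 0, as traced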
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.661 --rc genhtml_branch_coverage=1 00:08:13.661 --rc genhtml_function_coverage=1 00:08:13.661 --rc genhtml_legend=1 00:08:13.661 --rc geninfo_all_blocks=1 00:08:13.661 --rc geninfo_unexecuted_blocks=1 00:08:13.661 00:08:13.661 ' 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.661 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.920 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.921 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.921 15:26:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.547 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.547 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:20.548 15:26:58 
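One entry above deserves a flag: common.sh line 33 runs '[' '' -eq 1 ']' and bash prints "[: : integer expression expected", because an empty string reached a numeric test; the script carries on regardless, so the test evidently sits in a conditional where a non-zero status is tolerated. A generic, hypothetical illustration of the failure mode and the usual guard (the variable name is invented, not taken from common.sh):

    flag=''
    if [ "$flag" -eq 1 ]; then          # reproduces "[: : integer expression expected"
        echo "feature on"
    fi
    if [ "${flag:-0}" -eq 1 ]; then     # :-0 defaults the empty value, keeping the test well-formed
        echo "feature on"
    fi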
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:20.548 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:20.548 
15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:20.548 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:20.548 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:20.548 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:20.548 15:26:58 
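The device-discovery trace above reduces to a sysfs walk: for each supported mlx5 PCI function, glob the netdev directory under the device and strip the paths down to bare interface names. A standalone sketch over the two addresses the log actually found — the loop body mirrors the @411/@427/@428 entries, while the packaging around them is an assumption:

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)         # @411: glob the device's netdev dir
        pci_net_devs=("${pci_net_devs[@]##*/}")                  # @427: keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"  # @428: e.g. mlx_0_0 / mlx_0_1
    done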
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:20.548 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:20.549 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:20.549 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:20.549 altname enp217s0f0np0 00:08:20.549 altname ens818f0np0 00:08:20.549 inet 192.168.100.8/24 scope global mlx_0_0 00:08:20.549 valid_lft forever preferred_lft forever 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:20.549 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:20.549 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:20.549 altname enp217s0f1np1 00:08:20.549 
altname ens818f1np1 00:08:20.549 inet 192.168.100.9/24 scope global mlx_0_1 00:08:20.549 valid_lft forever preferred_lft forever 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:20.549 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:20.824 15:26:58 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:20.824 192.168.100.9' 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:20.824 192.168.100.9' 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:20.824 192.168.100.9' 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:20.824 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2125924 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2125924 00:08:20.825 15:26:58 
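The address harvesting traced above is a short pipeline: take the first IPv4 address on each RDMA interface, then split the resulting list with head/tail. A compact restatement, with interface names and addresses as observed in this run:

  # Pull the first IPv4 address off an interface, exactly as common.sh does.
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9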
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2125924 ']' 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.825 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.825 [2024-11-03 15:26:58.464168] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:08:20.825 [2024-11-03 15:26:58.464224] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.825 [2024-11-03 15:26:58.544775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.825 [2024-11-03 15:26:58.567349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.825 [2024-11-03 15:26:58.567386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.825 [2024-11-03 15:26:58.567396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.825 [2024-11-03 15:26:58.567405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.825 [2024-11-03 15:26:58.567412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
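The waitforlisten step above boils down to polling the RPC socket until the freshly started target answers. A hedged sketch of that pattern, where SPDK_DIR is a stand-in for the workspace path printed in the log:

  # Start the target and block until its RPC socket accepts a request.
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target already died
      sleep 0.5
  done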
00:08:20.825 [2024-11-03 15:26:58.568628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.825 [2024-11-03 15:26:58.568631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.084 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.084 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:08:21.084 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.085 [2024-11-03 15:26:58.727946] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18a6590/0x18aaa40) succeed. 00:08:21.085 [2024-11-03 15:26:58.736781] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18a7a90/0x18ec0e0) succeed. 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.085 [2024-11-03 15:26:58.824765] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.085 NULL1 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.085 Delay0 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2126016 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:21.085 15:26:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:21.345 [2024-11-03 15:26:58.938872] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
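Taken together, the setup traced in this test reduces to the RPC sequence below (arguments copied from the log; the rpc.py path is abbreviated):

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB null backing bdev, 512 B blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev adds roughly one second of artificial latency per I/O, so when spdk_nvme_perf is started against the subsystem there are guaranteed to be commands still in flight when nvmf_delete_subsystem fires, which is exactly the error storm that follows.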
00:08:23.250 15:27:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.250 15:27:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.250 15:27:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.627 NVMe io qpair process completion error 00:08:24.627 NVMe io qpair process completion error 00:08:24.627 NVMe io qpair process completion error 00:08:24.627 NVMe io qpair process completion error 00:08:24.627 NVMe io qpair process completion error 00:08:24.627 NVMe io qpair process completion error 00:08:24.627 15:27:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.627 15:27:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:24.627 15:27:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2126016 00:08:24.627 15:27:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:24.886 15:27:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:24.886 15:27:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2126016 00:08:24.886 15:27:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O 
failed: -6 00:08:25.455 Read completed with error (sct=0, sc=8) 00:08:25.455 starting I/O failed: -6 00:08:25.455 Write completed with error (sct=0, sc=8) 00:08:25.455
[... long run of identical 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions, many followed by 'starting I/O failed: -6', omitted ...]
00:08:25.456 Read completed with error (sct=0, sc=8) 00:08:25.456 Write completed with error (sct=0, sc=8)
00:08:25.456 Read completed with error (sct=0, sc=8) 00:08:25.456 Read completed with error (sct=0, sc=8) 00:08:25.456 Read completed with error (sct=0, sc=8) 00:08:25.456 Read completed with error (sct=0, sc=8) 00:08:25.456 Read completed with error (sct=0, sc=8) 00:08:25.456 Write completed with error (sct=0, sc=8) 00:08:25.456 Write completed with error (sct=0, sc=8) 00:08:25.456 Read completed with error (sct=0, sc=8) 00:08:25.456 Write completed with error (sct=0, sc=8) 00:08:25.456 Write completed with error (sct=0, sc=8) 00:08:25.456 Read completed with error (sct=0, sc=8) 00:08:25.456 Write completed with error (sct=0, sc=8) 00:08:25.456 Write completed with error (sct=0, sc=8) 00:08:25.456 Write completed with error (sct=0, sc=8) 00:08:25.456 Write completed with error (sct=0, sc=8) 00:08:25.457 Read completed with error (sct=0, sc=8) 00:08:25.457 Read completed with error (sct=0, sc=8) 00:08:25.457 Read completed with error (sct=0, sc=8) 00:08:25.457 Read completed with error (sct=0, sc=8) 00:08:25.457 Initializing NVMe Controllers 00:08:25.457 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:25.457 Controller IO queue size 128, less than required. 00:08:25.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:25.457 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:25.457 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:25.457 Initialization complete. Launching workers. 00:08:25.457 ======================================================== 00:08:25.457 Latency(us) 00:08:25.457 Device Information : IOPS MiB/s Average min max 00:08:25.457 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.49 0.04 1593473.39 1000068.47 2975381.40 00:08:25.457 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.49 0.04 1595112.26 1000878.49 2976148.09 00:08:25.457 ======================================================== 00:08:25.457 Total : 160.99 0.08 1594292.83 1000068.47 2976148.09 00:08:25.457 00:08:25.457 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:25.457 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2126016 00:08:25.457 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:25.457 [2024-11-03 15:27:03.039833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:08:25.457 [2024-11-03 15:27:03.039881] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
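The kill/sleep records above come from a small polling loop in delete_subsystem.sh (line numbers as reported in the log). A sketch of that loop:

  # Wait up to ~15 s (30 polls x 0.5 s) for the perf process to exit;
  # 'kill -0' only tests that the PID exists, it sends no signal.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && exit 1
      sleep 0.5
  done

Once the PID is gone, kill -0 fails with "No such process", which is the message printed next.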
00:08:25.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:26.025 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2126016 00:08:26.026 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2126016) - No such process 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2126016 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2126016 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2126016 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.026 [2024-11-03 15:27:03.555680] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2126976 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:26.026 15:27:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.026 [2024-11-03 15:27:03.652783] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:26.594 15:27:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.594 15:27:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:26.594 15:27:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.852 15:27:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.852 15:27:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:26.852 15:27:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.419 15:27:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.419 15:27:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:27.419 15:27:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.986 15:27:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.986 15:27:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:27.986 15:27:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.554 15:27:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.554 15:27:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:28.554 15:27:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.121 15:27:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.121 15:27:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:29.121 15:27:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.380 15:27:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.380 15:27:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:29.380 15:27:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.947 15:27:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.947 15:27:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:29.947 15:27:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:30.514 15:27:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:30.514 15:27:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:30.514 15:27:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.081 15:27:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.081 15:27:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:31.081 15:27:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.648 15:27:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.648 15:27:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:31.648 15:27:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.907 15:27:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.907 15:27:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:31.907 15:27:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.474 15:27:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.474 15:27:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:32.474 15:27:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.040 15:27:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.040 15:27:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:33.040 15:27:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.040 Initializing NVMe Controllers 00:08:33.040 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:33.040 Controller IO queue size 128, less than required. 00:08:33.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:33.040 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:33.040 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:33.040 Initialization complete. Launching workers. 00:08:33.040 ======================================================== 00:08:33.040 Latency(us) 00:08:33.040 Device Information : IOPS MiB/s Average min max 00:08:33.040 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001254.07 1000063.03 1004045.21 00:08:33.040 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002530.84 1000099.41 1006591.45 00:08:33.040 ======================================================== 00:08:33.040 Total : 256.00 0.12 1001892.45 1000063.03 1006591.45 00:08:33.040 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2126976 00:08:33.608 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2126976) - No such process 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2126976 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:33.608 rmmod nvme_rdma 00:08:33.608 rmmod nvme_fabrics 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2125924 ']' 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2125924 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2125924 ']' 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2125924 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2125924 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2125924' 00:08:33.608 killing process with pid 2125924 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2125924 00:08:33.608 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2125924 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:33.867 00:08:33.867 real 0m20.232s 00:08:33.867 user 0m48.983s 00:08:33.867 sys 0m6.550s 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.867 ************************************ 00:08:33.867 END TEST nvmf_delete_subsystem 00:08:33.867 ************************************ 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.867 ************************************ 00:08:33.867 START TEST nvmf_host_management 00:08:33.867 ************************************ 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:33.867 * Looking for test storage... 
00:08:33.867 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:33.867 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.127 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.128 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.128 15:27:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:40.699 15:27:18 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:40.699 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:40.699 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:40.699 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:d9:00.1: mlx_0_1' 00:08:40.699 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:40.699 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:40.700 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:40.700 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:40.700 altname enp217s0f0np0 00:08:40.700 altname ens818f0np0 00:08:40.700 inet 192.168.100.8/24 scope global mlx_0_0 00:08:40.700 valid_lft forever preferred_lft forever 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:40.700 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:40.959 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:40.959 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:08:40.959 altname enp217s0f1np1 00:08:40.959 altname ens818f1np1 00:08:40.959 inet 192.168.100.9/24 scope global mlx_0_1 00:08:40.959 valid_lft forever preferred_lft forever 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:40.959 15:27:18 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:40.959 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:40.960 192.168.100.9' 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:40.960 192.168.100.9' 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:40.960 192.168.100.9' 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2132087 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2132087 00:08:40.960 
15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2132087 ']' 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:40.960 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.960 [2024-11-03 15:27:18.665468] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:08:40.960 [2024-11-03 15:27:18.665519] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.960 [2024-11-03 15:27:18.741098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.219 [2024-11-03 15:27:18.764023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.219 [2024-11-03 15:27:18.764062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.219 [2024-11-03 15:27:18.764073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.219 [2024-11-03 15:27:18.764081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.219 [2024-11-03 15:27:18.764091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
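nvmfappstart, traced above, amounts to launching nvmf_tgt with a shared-memory id, tracepoint mask, and core mask, then blocking until the app's RPC socket answers. A rough sketch of that launch sequence; the polling loop approximates waitforlisten rather than reproducing its actual implementation:

    # Start the target on cores 1-4 (-m 0x1E) with all tracepoint groups (-e 0xFFFF).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # Block until init completes and /var/tmp/spdk.sock accepts RPCs.
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done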
00:08:41.219 [2024-11-03 15:27:18.765886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.219 [2024-11-03 15:27:18.765952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.219 [2024-11-03 15:27:18.766044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.219 [2024-11-03 15:27:18.766046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:41.219 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.219 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:41.219 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:41.219 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.219 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.219 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.219 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:41.219 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.219 15:27:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.219 [2024-11-03 15:27:18.934398] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e2df50/0x1e32400) succeed. 00:08:41.219 [2024-11-03 15:27:18.944292] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e2f590/0x1e73aa0) succeed. 
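With the RDMA transport created, the rpcs.txt batch that host_management.sh feeds through rpc_cmd (below) stands up the Malloc-backed subsystem bdevperf will attach to. The exact batch is not echoed in the log; a representative sequence, assuming the names the log does show (Malloc0, cnode0, host0, 192.168.100.8:4420), would be:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420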
00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.479 Malloc0 00:08:41.479 [2024-11-03 15:27:19.138439] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2132284 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2132284 /var/tmp/bdevperf.sock 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2132284 ']' 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:41.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
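The perf job started at host_management.sh@72 above (pid 2132284) receives its bdev configuration on fd 63 through process substitution, so no temporary file is needed. Stripped of harness plumbing, the invocation is essentially:

    # 64-deep, 64 KiB 'verify' workload for 10 s against the attached namespace;
    # gen_nvmf_target_json emits the JSON shown expanding below.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!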
00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.479 { 00:08:41.479 "params": { 00:08:41.479 "name": "Nvme$subsystem", 00:08:41.479 "trtype": "$TEST_TRANSPORT", 00:08:41.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.479 "adrfam": "ipv4", 00:08:41.479 "trsvcid": "$NVMF_PORT", 00:08:41.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.479 "hdgst": ${hdgst:-false}, 00:08:41.479 "ddgst": ${ddgst:-false} 00:08:41.479 }, 00:08:41.479 "method": "bdev_nvme_attach_controller" 00:08:41.479 } 00:08:41.479 EOF 00:08:41.479 )") 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:41.479 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.479 "params": { 00:08:41.479 "name": "Nvme0", 00:08:41.479 "trtype": "rdma", 00:08:41.479 "traddr": "192.168.100.8", 00:08:41.479 "adrfam": "ipv4", 00:08:41.479 "trsvcid": "4420", 00:08:41.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:41.479 "hdgst": false, 00:08:41.479 "ddgst": false 00:08:41.479 }, 00:08:41.479 "method": "bdev_nvme_attach_controller" 00:08:41.479 }' 00:08:41.479 [2024-11-03 15:27:19.241875] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:08:41.479 [2024-11-03 15:27:19.241925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132284 ] 00:08:41.738 [2024-11-03 15:27:19.321527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.738 [2024-11-03 15:27:19.344131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.738 Running I/O for 10 seconds... 
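gen_nvmf_target_json, whose expansion is traced above, builds one bdev_nvme_attach_controller stanza per subsystem id from a heredoc template and normalizes the result with jq. A condensed sketch of the same pattern, not the verbatim common.sh function:

    gen_target_json() {
        local sub config=()
        for sub in "${@:-0}"; do
            # One attach stanza per subsystem id, matching the params printed above.
            config+=("{\"params\": {\"name\": \"Nvme$sub\", \"trtype\": \"rdma\", \"traddr\": \"192.168.100.8\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$sub\", \"hostnqn\": \"nqn.2016-06.io.spdk:host$sub\", \"hdgst\": false, \"ddgst\": false}, \"method\": \"bdev_nvme_attach_controller\"}")
        done
        local IFS=,
        # Join the stanzas with commas, wrap them in the bdev subsystem, pretty-print.
        jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
    }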
00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
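The waitforio gate that just passed (host_management.sh@54-@60, read_io_count=131) ties the fault injection to proof that I/O is actually flowing: it polls bdev_get_iostat over the bdevperf RPC socket until the bdev reports at least 100 completed reads. A condensed sketch of that helper; the retry count and sleep interval here are illustrative:

    # Poll until the bdev has completed >= 100 reads, max 10 tries.
    waitforio() {
        local sock=$1 bdev=$2 i=10 count
        while (( i-- > 0 )); do
            count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
                    jq -r '.bdevs[0].num_read_ops')
            [[ ${count:-0} -ge 100 ]] && return 0
            sleep 0.25
        done
        return 1
    }

    waitforio /var/tmp/bdevperf.sock Nvme0n1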
00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.998 15:27:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:42.936 264.00 IOPS, 16.50 MiB/s [2024-11-03T14:27:20.726Z] [2024-11-03 15:27:20.646206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:42.936 [2024-11-03 15:27:20.646243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:51409 cdw0:0 sqhd:30a4 p:1 m:0 dnr:0 00:08:42.936 [2024-11-03 15:27:20.646255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:42.936 [2024-11-03 15:27:20.646264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:51409 cdw0:0 sqhd:30a4 p:1 m:0 dnr:0 00:08:42.936 [2024-11-03 15:27:20.646275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:42.936 [2024-11-03 15:27:20.646284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:51409 cdw0:0 sqhd:30a4 p:1 m:0 dnr:0 00:08:42.936 [2024-11-03 15:27:20.646294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:42.936 [2024-11-03 15:27:20.646303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:51409 cdw0:0 sqhd:30a4 p:1 m:0 dnr:0 00:08:42.936 [2024-11-03 15:27:20.648235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:08:42.936 [2024-11-03 15:27:20.648253] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
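The two RPCs at @84-@85 are the whole fault injection: the host's NQN is revoked from the subsystem while bdevperf is mid-workload, then immediately restored. The target tears down the host's queue pairs, so every in-flight command completes as ABORTED - SQ DELETION and the initiator-side controller drops into a failed state, as the completions below show. Reproduced outside the harness this is just:

    # Revoke and restore host access mid-I/O; expect qpair teardown in between.
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1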
00:08:42.936 [2024-11-03 15:27:20.648279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d4fb00 len:0x10000 key:0x181e00 00:08:42.936 [2024-11-03 15:27:20.648291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:51409 cdw0:fa2e2000 sqhd:35d6 p:1 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION (00/08) completion notice pairs repeat for every remaining queued command, lba:38016 through lba:40832, as qpair 1 is torn down ...]
00:08:42.936 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2132284 00:08:42.936 [2024-11-03 15:27:20.648829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009fc7000 len:0x10000 key:0x182900 00:08:42.936 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:42.936 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:42.936 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:42.936 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:42.936 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:42.937
15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:42.937 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:42.937 { 00:08:42.937 "params": { 00:08:42.937 "name": "Nvme$subsystem", 00:08:42.937 "trtype": "$TEST_TRANSPORT", 00:08:42.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.937 "adrfam": "ipv4", 00:08:42.937 "trsvcid": "$NVMF_PORT", 00:08:42.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.937 "hdgst": ${hdgst:-false}, 00:08:42.937 "ddgst": ${ddgst:-false} 00:08:42.937 }, 00:08:42.937 "method": "bdev_nvme_attach_controller" 00:08:42.937 } 00:08:42.937 EOF 00:08:42.937 )") 00:08:42.937 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:42.937 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:42.937 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:42.937 15:27:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:42.937 "params": { 00:08:42.937 "name": "Nvme0", 00:08:42.937 "trtype": "rdma", 00:08:42.937 "traddr": "192.168.100.8", 00:08:42.937 "adrfam": "ipv4", 00:08:42.937 "trsvcid": "4420", 00:08:42.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:42.937 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:42.937 "hdgst": false, 00:08:42.937 "ddgst": false 00:08:42.937 }, 00:08:42.937 "method": "bdev_nvme_attach_controller" 00:08:42.937 }' 00:08:42.937 [2024-11-03 15:27:20.698085] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:08:42.937 [2024-11-03 15:27:20.698138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132534 ] 00:08:43.196 [2024-11-03 15:27:20.775957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.196 [2024-11-03 15:27:20.798275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.196 Running I/O for 1 seconds... 
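The heredoc-plus-jq sequence traced above just materializes a bdev_nvme_attach_controller entry and hands it to bdevperf over a process-substitution file descriptor. A minimal standalone sketch of the same step, assuming the printed params sit inside the usual SPDK "subsystems" envelope (the envelope itself is not visible in the trace) and that the commands run from the spdk checkout:

  # write the bdevperf JSON config by hand instead of via gen_nvmf_target_json
  cat > /tmp/bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "rdma",
              "traddr": "192.168.100.8",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # same bdevperf invocation as host_management.sh@100: queue depth 64, 64 KiB IO, verify, 1 s
  ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1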
00:08:44.574 3072.00 IOPS, 192.00 MiB/s 00:08:44.574 Latency(us) 00:08:44.574 [2024-11-03T14:27:22.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.574 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:44.574 Verification LBA range: start 0x0 length 0x400 00:08:44.574 Nvme0n1 : 1.02 3123.93 195.25 0.00 0.00 20074.32 632.42 39426.46 00:08:44.574 [2024-11-03T14:27:22.364Z] =================================================================================================================== 00:08:44.574 [2024-11-03T14:27:22.364Z] Total : 3123.93 195.25 0.00 0.00 20074.32 632.42 39426.46 00:08:44.574 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2132284 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:44.574 rmmod nvme_rdma 00:08:44.574 rmmod nvme_fabrics 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2132087 ']' 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2132087 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2132087 ']' 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2132087 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2132087 00:08:44.574 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:44.575 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:44.575 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2132087' 00:08:44.575 killing process with pid 2132087 00:08:44.575 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2132087 00:08:44.575 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2132087 00:08:44.834 [2024-11-03 15:27:22.519465] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:44.834 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.834 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:44.834 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:44.834 00:08:44.834 real 0m10.993s 00:08:44.834 user 0m19.559s 00:08:44.834 sys 0m6.271s 00:08:44.834 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.834 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.834 ************************************ 00:08:44.834 END TEST nvmf_host_management 00:08:44.834 ************************************ 00:08:44.834 15:27:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:44.834 15:27:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:44.834 15:27:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.834 15:27:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.093 ************************************ 00:08:45.093 START TEST nvmf_lvol 00:08:45.093 ************************************ 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:45.093 * Looking for test storage... 
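A note on the killprocess sequence traced above, before the lvol output continues: the harness first probes the PID with kill -0, then resolves its command name with ps --no-headers -o comm= and refuses to signal a sudo wrapper, and only then kills and reaps. A rough standalone equivalent (the function body is a sketch, not the harness's exact code):

  killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0      # nothing left to kill
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1
    [[ $name == sudo ]] && return 1             # never signal a sudo wrapper blindly
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null      # SIGTERM, then reap the child
  }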
00:08:45.093 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.093 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.094 --rc genhtml_branch_coverage=1 00:08:45.094 --rc genhtml_function_coverage=1 00:08:45.094 --rc genhtml_legend=1 00:08:45.094 --rc geninfo_all_blocks=1 00:08:45.094 --rc geninfo_unexecuted_blocks=1 00:08:45.094 00:08:45.094 ' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.094 --rc genhtml_branch_coverage=1 00:08:45.094 --rc genhtml_function_coverage=1 00:08:45.094 --rc genhtml_legend=1 00:08:45.094 --rc geninfo_all_blocks=1 00:08:45.094 --rc geninfo_unexecuted_blocks=1 00:08:45.094 00:08:45.094 ' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.094 --rc genhtml_branch_coverage=1 00:08:45.094 --rc genhtml_function_coverage=1 00:08:45.094 --rc genhtml_legend=1 00:08:45.094 --rc geninfo_all_blocks=1 00:08:45.094 --rc geninfo_unexecuted_blocks=1 00:08:45.094 00:08:45.094 ' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.094 --rc genhtml_branch_coverage=1 00:08:45.094 --rc genhtml_function_coverage=1 00:08:45.094 --rc genhtml_legend=1 00:08:45.094 --rc geninfo_all_blocks=1 00:08:45.094 --rc geninfo_unexecuted_blocks=1 00:08:45.094 00:08:45.094 ' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.094 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.094 15:27:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.663 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.664 15:27:28 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:51.664 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:51.664 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:51.664 15:27:28 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:51.664 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:51.664 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
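The discovery pass above matches each Mellanox PCI function (vendor 0x15b3, device 0x1015) and then resolves it to its kernel interface through the standard sysfs layout; the RDMA module loads continue below. A minimal sketch of that resolution step, using the PCI addresses found on this machine:

  # enumerate the net interfaces that sit behind an RDMA-capable PCI function
  for pci in 0000:d9:00.0 0000:d9:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done
  done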
00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:51.664 
15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:51.664 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:51.664 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:51.664 altname enp217s0f0np0 00:08:51.664 altname ens818f0np0 00:08:51.664 inet 192.168.100.8/24 scope global mlx_0_0 00:08:51.664 valid_lft forever preferred_lft forever 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:51.664 15:27:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:51.664 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:51.664 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:51.665 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:51.665 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:51.665 altname enp217s0f1np1 00:08:51.665 altname ens818f1np1 00:08:51.665 inet 192.168.100.9/24 scope global mlx_0_1 00:08:51.665 valid_lft forever preferred_lft forever 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:51.665 192.168.100.9' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:51.665 192.168.100.9' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:51.665 192.168.100.9' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:51.665 
15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2136113 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2136113 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2136113 ']' 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.665 [2024-11-03 15:27:29.178456] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:08:51.665 [2024-11-03 15:27:29.178512] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.665 [2024-11-03 15:27:29.255063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.665 [2024-11-03 15:27:29.276211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.665 [2024-11-03 15:27:29.276253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.665 [2024-11-03 15:27:29.276262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.665 [2024-11-03 15:27:29.276270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.665 [2024-11-03 15:27:29.276277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
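nvmfappstart -m 0x7 above launches nvmf_tgt with core mask 0x7 (binary 111, i.e. reactors on cores 0, 1 and 2, matching the three "Reactor started" notices that follow), and waitforlisten then blocks until the RPC socket answers. A rough equivalent of that wait, assuming the default /var/tmp/spdk.sock and a running spdk checkout:

  # poll the target's RPC socket until it accepts requests (sketch of waitforlisten)
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done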
00:08:51.665 [2024-11-03 15:27:29.277825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.665 [2024-11-03 15:27:29.277924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.665 [2024-11-03 15:27:29.277926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.665 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:51.925 [2024-11-03 15:27:29.602062] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc7d0d0/0xc81580) succeed. 00:08:51.925 [2024-11-03 15:27:29.610887] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc7e670/0xcc2c20) succeed. 00:08:52.184 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.184 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:52.184 15:27:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.443 15:27:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:52.443 15:27:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:52.702 15:27:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:52.961 15:27:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=83c45751-2403-4d37-a7ce-49c9f2f9d5ae 00:08:52.961 15:27:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 83c45751-2403-4d37-a7ce-49c9f2f9d5ae lvol 20 00:08:52.961 15:27:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9fab267b-49fa-46f8-a878-c8e30b284b00 00:08:52.961 15:27:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:53.220 15:27:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9fab267b-49fa-46f8-a878-c8e30b284b00 00:08:53.479 15:27:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:53.738 [2024-11-03 15:27:31.298733] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:53.738 15:27:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:53.738 15:27:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2136573 00:08:53.738 15:27:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:53.738 15:27:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:55.115 15:27:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9fab267b-49fa-46f8-a878-c8e30b284b00 MY_SNAPSHOT 00:08:55.115 15:27:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=29741fb0-490d-494a-a90a-cc421b5c5389 00:08:55.115 15:27:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9fab267b-49fa-46f8-a878-c8e30b284b00 30 00:08:55.374 15:27:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 29741fb0-490d-494a-a90a-cc421b5c5389 MY_CLONE 00:08:55.374 15:27:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6795391b-21e7-4275-91ce-701af464b523 00:08:55.374 15:27:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6795391b-21e7-4275-91ce-701af464b523 00:08:55.633 15:27:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2136573 00:09:05.735 Initializing NVMe Controllers 00:09:05.735 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:05.735 Controller IO queue size 128, less than required. 00:09:05.735 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:05.735 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:05.735 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:05.735 Initialization complete. Launching workers. 
00:09:05.735 ======================================================== 00:09:05.735 Latency(us) 00:09:05.735 Device Information : IOPS MiB/s Average min max 00:09:05.735 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16342.10 63.84 7833.56 2346.17 45410.12 00:09:05.735 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16293.10 63.64 7857.05 3233.29 50693.83 00:09:05.735 ======================================================== 00:09:05.735 Total : 32635.20 127.48 7845.29 2346.17 50693.83 00:09:05.735 00:09:05.735 15:27:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9fab267b-49fa-46f8-a878-c8e30b284b00 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 83c45751-2403-4d37-a7ce-49c9f2f9d5ae 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.735 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:05.735 rmmod nvme_rdma 00:09:05.995 rmmod nvme_fabrics 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2136113 ']' 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2136113 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2136113 ']' 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2136113 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2136113 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2136113' 00:09:05.995 killing process with pid 2136113 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2136113 00:09:05.995 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2136113 00:09:06.255 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.255 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:06.255 00:09:06.255 real 0m21.256s 00:09:06.255 user 1m10.181s 00:09:06.255 sys 0m5.962s 00:09:06.255 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:06.255 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:06.255 ************************************ 00:09:06.255 END TEST nvmf_lvol 00:09:06.255 ************************************ 00:09:06.255 15:27:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:06.255 15:27:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:06.255 15:27:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:06.255 15:27:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.255 ************************************ 00:09:06.255 START TEST nvmf_lvs_grow 00:09:06.255 ************************************ 00:09:06.255 15:27:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:06.255 * Looking for test storage... 
00:09:06.255 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:06.255 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:06.255 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:09:06.255 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.515 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:06.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.516 --rc genhtml_branch_coverage=1 00:09:06.516 --rc genhtml_function_coverage=1 00:09:06.516 --rc genhtml_legend=1 00:09:06.516 --rc geninfo_all_blocks=1 00:09:06.516 --rc geninfo_unexecuted_blocks=1 00:09:06.516 00:09:06.516 ' 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:06.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.516 --rc genhtml_branch_coverage=1 00:09:06.516 --rc genhtml_function_coverage=1 00:09:06.516 --rc genhtml_legend=1 00:09:06.516 --rc geninfo_all_blocks=1 00:09:06.516 --rc geninfo_unexecuted_blocks=1 00:09:06.516 00:09:06.516 ' 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:06.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.516 --rc genhtml_branch_coverage=1 00:09:06.516 --rc genhtml_function_coverage=1 00:09:06.516 --rc genhtml_legend=1 00:09:06.516 --rc geninfo_all_blocks=1 00:09:06.516 --rc geninfo_unexecuted_blocks=1 00:09:06.516 00:09:06.516 ' 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:06.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.516 --rc genhtml_branch_coverage=1 00:09:06.516 --rc genhtml_function_coverage=1 00:09:06.516 --rc genhtml_legend=1 00:09:06.516 --rc geninfo_all_blocks=1 00:09:06.516 --rc geninfo_unexecuted_blocks=1 00:09:06.516 00:09:06.516 ' 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
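The xtrace above walks scripts/common.sh's dotted-version comparison ("lt 1.15 2" through cmp_versions), which decides whether the installed lcov is new enough for the branch/function coverage flags exported just below. A minimal standalone sketch of that comparison, assuming a hypothetical helper name version_lt rather than SPDK's actual functions:

# Sketch only: split both versions on '.', then compare field by field
# numerically, treating missing fields as 0 -- the same ver1/ver2 loop
# traced above in scripts/common.sh.
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov older than 2: use legacy LCOV_OPTS"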
00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.516 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.516 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.517 15:27:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.094 15:27:50 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:13.094 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:13.094 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:13.094 15:27:50 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:13.094 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:13.094 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:13.094 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:13.095 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:13.095 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:13.095 altname enp217s0f0np0 00:09:13.095 altname ens818f0np0 00:09:13.095 inet 192.168.100.8/24 scope global mlx_0_0 00:09:13.095 valid_lft forever preferred_lft forever 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:13.095 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:13.095 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:13.095 altname enp217s0f1np1 00:09:13.095 altname ens818f1np1 00:09:13.095 inet 192.168.100.9/24 scope global mlx_0_1 00:09:13.095 valid_lft forever preferred_lft forever 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:13.095 15:27:50 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:13.095 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:13.096 192.168.100.9' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:13.096 192.168.100.9' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:13.096 192.168.100.9' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2141991 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2141991 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2141991 ']' 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.096 [2024-11-03 15:27:50.478488] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:09:13.096 [2024-11-03 15:27:50.478544] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.096 [2024-11-03 15:27:50.558421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.096 [2024-11-03 15:27:50.580310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.096 [2024-11-03 15:27:50.580347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.096 [2024-11-03 15:27:50.580358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.096 [2024-11-03 15:27:50.580367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.096 [2024-11-03 15:27:50.580374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
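The get_ip_address calls traced above derive each RDMA interface's IPv4 address by piping "ip -o -4 addr show" through awk and cut, which is how NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP end up as 192.168.100.8 and 192.168.100.9. A condensed sketch of that pipeline, assuming a hypothetical wrapper name get_if_ip; the values in the comments are the ones observed in this run:

# Field 4 of 'ip -o -4 addr show <if>' is the CIDR address
# (e.g. 192.168.100.8/24); cutting at '/' leaves the bare IP.
get_if_ip() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_if_ip mlx_0_0   # -> 192.168.100.8 in this run
get_if_ip mlx_0_1   # -> 192.168.100.9 in this run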
00:09:13.096 [2024-11-03 15:27:50.580977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.096 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:13.357 [2024-11-03 15:27:50.908926] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fcec50/0x1fd3100) succeed. 00:09:13.357 [2024-11-03 15:27:50.917998] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fd00b0/0x20147a0) succeed. 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.357 ************************************ 00:09:13.357 START TEST lvs_grow_clean 00:09:13.357 ************************************ 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:13.357 15:27:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:13.357 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:13.357 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:13.357 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.616 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:13.617 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:13.876 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=93018008-8591-410a-9fcc-994b7310f9b2 00:09:13.876 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:13.876 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:13.877 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:13.877 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:13.877 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 93018008-8591-410a-9fcc-994b7310f9b2 lvol 150 00:09:14.136 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=51de7ead-4e6c-4fb4-8d09-91aa31b2698b 00:09:14.136 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.136 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:14.396 [2024-11-03 15:27:51.950453] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:14.396 [2024-11-03 15:27:51.950505] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:14.396 true 00:09:14.396 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:14.396 15:27:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:14.396 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:14.396 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:14.665 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 51de7ead-4e6c-4fb4-8d09-91aa31b2698b 00:09:14.924 15:27:52 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:14.924 [2024-11-03 15:27:52.680820] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:14.924 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2142326 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2142326 /var/tmp/bdevperf.sock 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2142326 ']' 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:15.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:15.185 15:27:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:15.185 [2024-11-03 15:27:52.918479] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
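Because bdevperf was launched with -z, it idles until RPCs arrive on /var/tmp/bdevperf.sock; the attach that follows in the trace wires the exported RDMA namespace up as bdev Nvme0 before perform_tests runs. A condensed sketch of that attach step, using only flags visible in this run:

# -b bdev name, -t transport, -a target address, -s service id (port),
# -f address family, -n subsystem NQN -- matching the rpc.py call traced below.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0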
00:09:15.185 [2024-11-03 15:27:52.918532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142326 ] 00:09:15.445 [2024-11-03 15:27:52.997155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.445 [2024-11-03 15:27:53.019854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.445 15:27:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:15.445 15:27:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:09:15.445 15:27:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:15.704 Nvme0n1 00:09:15.704 15:27:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:15.964 [ 00:09:15.964 { 00:09:15.964 "name": "Nvme0n1", 00:09:15.964 "aliases": [ 00:09:15.964 "51de7ead-4e6c-4fb4-8d09-91aa31b2698b" 00:09:15.964 ], 00:09:15.964 "product_name": "NVMe disk", 00:09:15.964 "block_size": 4096, 00:09:15.964 "num_blocks": 38912, 00:09:15.964 "uuid": "51de7ead-4e6c-4fb4-8d09-91aa31b2698b", 00:09:15.964 "numa_id": 1, 00:09:15.964 "assigned_rate_limits": { 00:09:15.964 "rw_ios_per_sec": 0, 00:09:15.964 "rw_mbytes_per_sec": 0, 00:09:15.964 "r_mbytes_per_sec": 0, 00:09:15.964 "w_mbytes_per_sec": 0 00:09:15.964 }, 00:09:15.964 "claimed": false, 00:09:15.964 "zoned": false, 00:09:15.964 "supported_io_types": { 00:09:15.964 "read": true, 00:09:15.964 "write": true, 00:09:15.964 "unmap": true, 00:09:15.964 "flush": true, 00:09:15.964 "reset": true, 00:09:15.964 "nvme_admin": true, 00:09:15.964 "nvme_io": true, 00:09:15.964 "nvme_io_md": false, 00:09:15.964 "write_zeroes": true, 00:09:15.964 "zcopy": false, 00:09:15.964 "get_zone_info": false, 00:09:15.964 "zone_management": false, 00:09:15.964 "zone_append": false, 00:09:15.964 "compare": true, 00:09:15.964 "compare_and_write": true, 00:09:15.964 "abort": true, 00:09:15.964 "seek_hole": false, 00:09:15.964 "seek_data": false, 00:09:15.964 "copy": true, 00:09:15.964 "nvme_iov_md": false 00:09:15.964 }, 00:09:15.964 "memory_domains": [ 00:09:15.964 { 00:09:15.964 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:15.964 "dma_device_type": 0 00:09:15.964 } 00:09:15.964 ], 00:09:15.964 "driver_specific": { 00:09:15.964 "nvme": [ 00:09:15.964 { 00:09:15.964 "trid": { 00:09:15.964 "trtype": "RDMA", 00:09:15.964 "adrfam": "IPv4", 00:09:15.964 "traddr": "192.168.100.8", 00:09:15.964 "trsvcid": "4420", 00:09:15.964 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:15.964 }, 00:09:15.964 "ctrlr_data": { 00:09:15.964 "cntlid": 1, 00:09:15.964 "vendor_id": "0x8086", 00:09:15.964 "model_number": "SPDK bdev Controller", 00:09:15.964 "serial_number": "SPDK0", 00:09:15.964 "firmware_revision": "25.01", 00:09:15.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.964 "oacs": { 00:09:15.964 "security": 0, 00:09:15.964 "format": 0, 00:09:15.964 "firmware": 0, 00:09:15.964 "ns_manage": 0 00:09:15.964 }, 00:09:15.964 "multi_ctrlr": true, 
00:09:15.964 "ana_reporting": false 00:09:15.964 }, 00:09:15.964 "vs": { 00:09:15.964 "nvme_version": "1.3" 00:09:15.964 }, 00:09:15.964 "ns_data": { 00:09:15.964 "id": 1, 00:09:15.964 "can_share": true 00:09:15.964 } 00:09:15.964 } 00:09:15.964 ], 00:09:15.964 "mp_policy": "active_passive" 00:09:15.965 } 00:09:15.965 } 00:09:15.965 ] 00:09:15.965 15:27:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2142575 00:09:15.965 15:27:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.965 15:27:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:15.965 Running I/O for 10 seconds... 00:09:16.903 Latency(us) 00:09:16.904 [2024-11-03T14:27:54.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.904 Nvme0n1 : 1.00 34561.00 135.00 0.00 0.00 0.00 0.00 0.00 00:09:16.904 [2024-11-03T14:27:54.694Z] =================================================================================================================== 00:09:16.904 [2024-11-03T14:27:54.694Z] Total : 34561.00 135.00 0.00 0.00 0.00 0.00 0.00 00:09:16.904 00:09:17.842 15:27:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:18.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.102 Nvme0n1 : 2.00 34959.50 136.56 0.00 0.00 0.00 0.00 0.00 00:09:18.102 [2024-11-03T14:27:55.892Z] =================================================================================================================== 00:09:18.102 [2024-11-03T14:27:55.892Z] Total : 34959.50 136.56 0.00 0.00 0.00 0.00 0.00 00:09:18.102 00:09:18.102 true 00:09:18.102 15:27:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:18.102 15:27:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:18.362 15:27:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:18.362 15:27:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:18.362 15:27:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2142575 00:09:18.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.931 Nvme0n1 : 3.00 35095.33 137.09 0.00 0.00 0.00 0.00 0.00 00:09:18.931 [2024-11-03T14:27:56.721Z] =================================================================================================================== 00:09:18.931 [2024-11-03T14:27:56.721Z] Total : 35095.33 137.09 0.00 0.00 0.00 0.00 0.00 00:09:18.931 00:09:19.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.869 Nvme0n1 : 4.00 35247.75 137.69 0.00 0.00 0.00 0.00 0.00 00:09:19.869 [2024-11-03T14:27:57.659Z] 
=================================================================================================================== 00:09:19.869 [2024-11-03T14:27:57.659Z] Total : 35247.75 137.69 0.00 0.00 0.00 0.00 0.00 00:09:19.869 00:09:21.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.247 Nvme0n1 : 5.00 35334.20 138.02 0.00 0.00 0.00 0.00 0.00 00:09:21.247 [2024-11-03T14:27:59.037Z] =================================================================================================================== 00:09:21.247 [2024-11-03T14:27:59.037Z] Total : 35334.20 138.02 0.00 0.00 0.00 0.00 0.00 00:09:21.247 00:09:22.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.185 Nvme0n1 : 6.00 35392.67 138.25 0.00 0.00 0.00 0.00 0.00 00:09:22.185 [2024-11-03T14:27:59.975Z] =================================================================================================================== 00:09:22.185 [2024-11-03T14:27:59.975Z] Total : 35392.67 138.25 0.00 0.00 0.00 0.00 0.00 00:09:22.185 00:09:23.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.124 Nvme0n1 : 7.00 35346.14 138.07 0.00 0.00 0.00 0.00 0.00 00:09:23.124 [2024-11-03T14:28:00.914Z] =================================================================================================================== 00:09:23.124 [2024-11-03T14:28:00.914Z] Total : 35346.14 138.07 0.00 0.00 0.00 0.00 0.00 00:09:23.124 00:09:24.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.063 Nvme0n1 : 8.00 35379.88 138.20 0.00 0.00 0.00 0.00 0.00 00:09:24.063 [2024-11-03T14:28:01.853Z] =================================================================================================================== 00:09:24.063 [2024-11-03T14:28:01.853Z] Total : 35379.88 138.20 0.00 0.00 0.00 0.00 0.00 00:09:24.063 00:09:25.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.001 Nvme0n1 : 9.00 35412.78 138.33 0.00 0.00 0.00 0.00 0.00 00:09:25.001 [2024-11-03T14:28:02.791Z] =================================================================================================================== 00:09:25.001 [2024-11-03T14:28:02.791Z] Total : 35412.78 138.33 0.00 0.00 0.00 0.00 0.00 00:09:25.001 00:09:25.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.940 Nvme0n1 : 10.00 35443.60 138.45 0.00 0.00 0.00 0.00 0.00 00:09:25.940 [2024-11-03T14:28:03.730Z] =================================================================================================================== 00:09:25.940 [2024-11-03T14:28:03.730Z] Total : 35443.60 138.45 0.00 0.00 0.00 0.00 0.00 00:09:25.940 00:09:25.940 00:09:25.940 Latency(us) 00:09:25.940 [2024-11-03T14:28:03.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.940 Nvme0n1 : 10.00 35442.93 138.45 0.00 0.00 3608.60 2700.08 15728.64 00:09:25.940 [2024-11-03T14:28:03.730Z] =================================================================================================================== 00:09:25.940 [2024-11-03T14:28:03.730Z] Total : 35442.93 138.45 0.00 0.00 3608.60 2700.08 15728.64 00:09:25.940 { 00:09:25.940 "results": [ 00:09:25.940 { 00:09:25.940 "job": "Nvme0n1", 00:09:25.940 "core_mask": "0x2", 00:09:25.940 "workload": "randwrite", 00:09:25.940 "status": "finished", 00:09:25.940 "queue_depth": 128, 00:09:25.940 "io_size": 4096, 
00:09:25.940 "runtime": 10.002926, 00:09:25.940 "iops": 35442.929398857894, 00:09:25.940 "mibps": 138.44894296428865, 00:09:25.940 "io_failed": 0, 00:09:25.940 "io_timeout": 0, 00:09:25.940 "avg_latency_us": 3608.602394500935, 00:09:25.940 "min_latency_us": 2700.0832, 00:09:25.940 "max_latency_us": 15728.64 00:09:25.940 } 00:09:25.940 ], 00:09:25.940 "core_count": 1 00:09:25.940 } 00:09:25.940 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2142326 00:09:25.940 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2142326 ']' 00:09:25.940 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2142326 00:09:25.940 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:09:25.940 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:25.940 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2142326 00:09:26.200 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:26.200 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:26.200 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2142326' 00:09:26.200 killing process with pid 2142326 00:09:26.200 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2142326 00:09:26.200 Received shutdown signal, test time was about 10.000000 seconds 00:09:26.200 00:09:26.200 Latency(us) 00:09:26.200 [2024-11-03T14:28:03.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.200 [2024-11-03T14:28:03.990Z] =================================================================================================================== 00:09:26.200 [2024-11-03T14:28:03.990Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:26.200 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2142326 00:09:26.200 15:28:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:26.460 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.718 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:26.718 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:26.718 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:26.718 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:26.718 15:28:04 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.977 [2024-11-03 15:28:04.664887] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:26.977 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:27.237 request: 00:09:27.237 { 00:09:27.237 "uuid": "93018008-8591-410a-9fcc-994b7310f9b2", 00:09:27.237 "method": "bdev_lvol_get_lvstores", 00:09:27.237 "req_id": 1 00:09:27.237 } 00:09:27.237 Got JSON-RPC error response 00:09:27.237 response: 00:09:27.237 { 00:09:27.237 "code": -19, 00:09:27.237 "message": "No such device" 00:09:27.237 } 00:09:27.237 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:27.237 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:27.237 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:27.237 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:27.237 15:28:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.497 aio_bdev 00:09:27.497 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 51de7ead-4e6c-4fb4-8d09-91aa31b2698b 00:09:27.497 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=51de7ead-4e6c-4fb4-8d09-91aa31b2698b 00:09:27.497 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:27.497 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:09:27.497 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:27.497 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:27.497 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:27.497 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 51de7ead-4e6c-4fb4-8d09-91aa31b2698b -t 2000 00:09:27.757 [ 00:09:27.757 { 00:09:27.757 "name": "51de7ead-4e6c-4fb4-8d09-91aa31b2698b", 00:09:27.757 "aliases": [ 00:09:27.757 "lvs/lvol" 00:09:27.757 ], 00:09:27.757 "product_name": "Logical Volume", 00:09:27.757 "block_size": 4096, 00:09:27.757 "num_blocks": 38912, 00:09:27.757 "uuid": "51de7ead-4e6c-4fb4-8d09-91aa31b2698b", 00:09:27.757 "assigned_rate_limits": { 00:09:27.757 "rw_ios_per_sec": 0, 00:09:27.757 "rw_mbytes_per_sec": 0, 00:09:27.757 "r_mbytes_per_sec": 0, 00:09:27.757 "w_mbytes_per_sec": 0 00:09:27.757 }, 00:09:27.757 "claimed": false, 00:09:27.757 "zoned": false, 00:09:27.757 "supported_io_types": { 00:09:27.757 "read": true, 00:09:27.757 "write": true, 00:09:27.757 "unmap": true, 00:09:27.757 "flush": false, 00:09:27.757 "reset": true, 00:09:27.757 "nvme_admin": false, 00:09:27.757 "nvme_io": false, 00:09:27.757 "nvme_io_md": false, 00:09:27.757 "write_zeroes": true, 00:09:27.757 "zcopy": false, 00:09:27.757 "get_zone_info": false, 00:09:27.757 "zone_management": false, 00:09:27.757 "zone_append": false, 00:09:27.757 "compare": false, 00:09:27.757 "compare_and_write": false, 00:09:27.757 "abort": false, 00:09:27.757 "seek_hole": true, 00:09:27.757 "seek_data": true, 00:09:27.757 "copy": false, 00:09:27.757 "nvme_iov_md": false 00:09:27.757 }, 00:09:27.757 "driver_specific": { 00:09:27.757 "lvol": { 00:09:27.757 "lvol_store_uuid": "93018008-8591-410a-9fcc-994b7310f9b2", 00:09:27.757 "base_bdev": "aio_bdev", 00:09:27.757 "thin_provision": false, 00:09:27.757 "num_allocated_clusters": 38, 00:09:27.757 "snapshot": false, 00:09:27.757 "clone": false, 00:09:27.757 "esnap_clone": false 00:09:27.757 } 00:09:27.757 } 00:09:27.757 } 00:09:27.757 ] 00:09:27.757 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:09:27.757 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:27.757 15:28:05 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:28.017 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:28.017 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:28.017 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:28.277 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:28.277 15:28:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 51de7ead-4e6c-4fb4-8d09-91aa31b2698b 00:09:28.277 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 93018008-8591-410a-9fcc-994b7310f9b2 00:09:28.537 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.797 00:09:28.797 real 0m15.439s 00:09:28.797 user 0m15.181s 00:09:28.797 sys 0m1.254s 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:28.797 ************************************ 00:09:28.797 END TEST lvs_grow_clean 00:09:28.797 ************************************ 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.797 ************************************ 00:09:28.797 START TEST lvs_grow_dirty 00:09:28.797 ************************************ 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.797 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.057 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:29.057 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:29.316 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:29.316 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:29.316 15:28:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:29.576 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:29.576 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:29.576 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 lvol 150 00:09:29.576 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ea64ac92-cb9c-42b2-9aca-c5746812a759 00:09:29.576 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.576 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:29.835 [2024-11-03 15:28:07.484895] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:29.835 [2024-11-03 15:28:07.484948] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:29.835 true 00:09:29.835 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:29.835 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:30.095 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:30.095 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:30.095 15:28:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea64ac92-cb9c-42b2-9aca-c5746812a759 00:09:30.355 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:30.614 [2024-11-03 15:28:08.219248] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:30.614 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2145064 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2145064 /var/tmp/bdevperf.sock 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2145064 ']' 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.874 [2024-11-03 15:28:08.447726] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
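Condensed for reference, the dirty-grow setup the trace above just completed reduces to a short rpc.py sequence. This is a paraphrase of the logged commands, not a verbatim excerpt: paths are shortened to scripts/rpc.py, and <lvs_uuid>/<lvol_uuid> stand in for the 30a4fc0f-... and ea64ac92-... IDs shown above.

  truncate -s 200M test/nvmf/target/aio_bdev                          # 200M backing file
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs                   # yields 49 data clusters
  scripts/rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150              # 150M thick-provisioned lvol
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t rdma -a 192.168.100.8 -s 4420

bdevperf (launched above with -m 0x2 -o 4096 -q 128 -w randwrite -t 10) then attaches to that subsystem over RDMA, which is what the EAL initialization below belongs to.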
00:09:30.874 [2024-11-03 15:28:08.447778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145064 ] 00:09:30.874 [2024-11-03 15:28:08.526177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.874 [2024-11-03 15:28:08.548882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:30.874 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:31.132 Nvme0n1 00:09:31.132 15:28:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:31.391 [ 00:09:31.391 { 00:09:31.391 "name": "Nvme0n1", 00:09:31.391 "aliases": [ 00:09:31.391 "ea64ac92-cb9c-42b2-9aca-c5746812a759" 00:09:31.391 ], 00:09:31.391 "product_name": "NVMe disk", 00:09:31.391 "block_size": 4096, 00:09:31.391 "num_blocks": 38912, 00:09:31.391 "uuid": "ea64ac92-cb9c-42b2-9aca-c5746812a759", 00:09:31.391 "numa_id": 1, 00:09:31.391 "assigned_rate_limits": { 00:09:31.391 "rw_ios_per_sec": 0, 00:09:31.391 "rw_mbytes_per_sec": 0, 00:09:31.391 "r_mbytes_per_sec": 0, 00:09:31.391 "w_mbytes_per_sec": 0 00:09:31.391 }, 00:09:31.391 "claimed": false, 00:09:31.391 "zoned": false, 00:09:31.391 "supported_io_types": { 00:09:31.391 "read": true, 00:09:31.391 "write": true, 00:09:31.391 "unmap": true, 00:09:31.391 "flush": true, 00:09:31.391 "reset": true, 00:09:31.391 "nvme_admin": true, 00:09:31.391 "nvme_io": true, 00:09:31.391 "nvme_io_md": false, 00:09:31.391 "write_zeroes": true, 00:09:31.391 "zcopy": false, 00:09:31.391 "get_zone_info": false, 00:09:31.391 "zone_management": false, 00:09:31.391 "zone_append": false, 00:09:31.391 "compare": true, 00:09:31.391 "compare_and_write": true, 00:09:31.391 "abort": true, 00:09:31.391 "seek_hole": false, 00:09:31.391 "seek_data": false, 00:09:31.391 "copy": true, 00:09:31.391 "nvme_iov_md": false 00:09:31.391 }, 00:09:31.391 "memory_domains": [ 00:09:31.391 { 00:09:31.391 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:31.391 "dma_device_type": 0 00:09:31.391 } 00:09:31.391 ], 00:09:31.391 "driver_specific": { 00:09:31.391 "nvme": [ 00:09:31.391 { 00:09:31.391 "trid": { 00:09:31.391 "trtype": "RDMA", 00:09:31.391 "adrfam": "IPv4", 00:09:31.391 "traddr": "192.168.100.8", 00:09:31.391 "trsvcid": "4420", 00:09:31.391 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:31.391 }, 00:09:31.391 "ctrlr_data": { 00:09:31.391 "cntlid": 1, 00:09:31.391 "vendor_id": "0x8086", 00:09:31.391 "model_number": "SPDK bdev Controller", 00:09:31.391 "serial_number": "SPDK0", 00:09:31.391 "firmware_revision": "25.01", 00:09:31.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:31.391 "oacs": { 00:09:31.391 "security": 0, 00:09:31.391 "format": 0, 00:09:31.391 "firmware": 0, 00:09:31.391 "ns_manage": 0 00:09:31.391 }, 00:09:31.391 "multi_ctrlr": true, 
00:09:31.391 "ana_reporting": false 00:09:31.391 }, 00:09:31.391 "vs": { 00:09:31.391 "nvme_version": "1.3" 00:09:31.391 }, 00:09:31.391 "ns_data": { 00:09:31.391 "id": 1, 00:09:31.391 "can_share": true 00:09:31.391 } 00:09:31.391 } 00:09:31.391 ], 00:09:31.391 "mp_policy": "active_passive" 00:09:31.391 } 00:09:31.391 } 00:09:31.391 ] 00:09:31.391 15:28:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2145304 00:09:31.391 15:28:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:31.391 15:28:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:31.650 Running I/O for 10 seconds... 00:09:32.589 Latency(us) 00:09:32.589 [2024-11-03T14:28:10.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.589 Nvme0n1 : 1.00 34275.00 133.89 0.00 0.00 0.00 0.00 0.00 00:09:32.589 [2024-11-03T14:28:10.379Z] =================================================================================================================== 00:09:32.589 [2024-11-03T14:28:10.379Z] Total : 34275.00 133.89 0.00 0.00 0.00 0.00 0.00 00:09:32.589 00:09:33.527 15:28:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:33.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.527 Nvme0n1 : 2.00 34639.50 135.31 0.00 0.00 0.00 0.00 0.00 00:09:33.527 [2024-11-03T14:28:11.317Z] =================================================================================================================== 00:09:33.527 [2024-11-03T14:28:11.317Z] Total : 34639.50 135.31 0.00 0.00 0.00 0.00 0.00 00:09:33.527 00:09:33.527 true 00:09:33.527 15:28:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:33.527 15:28:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:33.786 15:28:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:33.786 15:28:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:33.786 15:28:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2145304 00:09:34.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.724 Nvme0n1 : 3.00 34911.67 136.37 0.00 0.00 0.00 0.00 0.00 00:09:34.724 [2024-11-03T14:28:12.514Z] =================================================================================================================== 00:09:34.724 [2024-11-03T14:28:12.514Z] Total : 34911.67 136.37 0.00 0.00 0.00 0.00 0.00 00:09:34.724 00:09:35.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.661 Nvme0n1 : 4.00 35103.50 137.12 0.00 0.00 0.00 0.00 0.00 00:09:35.661 [2024-11-03T14:28:13.451Z] 
=================================================================================================================== 00:09:35.661 [2024-11-03T14:28:13.451Z] Total : 35103.50 137.12 0.00 0.00 0.00 0.00 0.00 00:09:35.661 00:09:36.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.600 Nvme0n1 : 5.00 35218.60 137.57 0.00 0.00 0.00 0.00 0.00 00:09:36.600 [2024-11-03T14:28:14.390Z] =================================================================================================================== 00:09:36.600 [2024-11-03T14:28:14.390Z] Total : 35218.60 137.57 0.00 0.00 0.00 0.00 0.00 00:09:36.600 00:09:37.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.538 Nvme0n1 : 6.00 35305.50 137.91 0.00 0.00 0.00 0.00 0.00 00:09:37.538 [2024-11-03T14:28:15.328Z] =================================================================================================================== 00:09:37.538 [2024-11-03T14:28:15.328Z] Total : 35305.50 137.91 0.00 0.00 0.00 0.00 0.00 00:09:37.538 00:09:38.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.477 Nvme0n1 : 7.00 35370.29 138.17 0.00 0.00 0.00 0.00 0.00 00:09:38.477 [2024-11-03T14:28:16.267Z] =================================================================================================================== 00:09:38.477 [2024-11-03T14:28:16.267Z] Total : 35370.29 138.17 0.00 0.00 0.00 0.00 0.00 00:09:38.477 00:09:39.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.605 Nvme0n1 : 8.00 35425.00 138.38 0.00 0.00 0.00 0.00 0.00 00:09:39.605 [2024-11-03T14:28:17.395Z] =================================================================================================================== 00:09:39.605 [2024-11-03T14:28:17.395Z] Total : 35425.00 138.38 0.00 0.00 0.00 0.00 0.00 00:09:39.605 00:09:40.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.544 Nvme0n1 : 9.00 35462.33 138.52 0.00 0.00 0.00 0.00 0.00 00:09:40.544 [2024-11-03T14:28:18.334Z] =================================================================================================================== 00:09:40.544 [2024-11-03T14:28:18.334Z] Total : 35462.33 138.52 0.00 0.00 0.00 0.00 0.00 00:09:40.544 00:09:41.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.483 Nvme0n1 : 10.00 35504.70 138.69 0.00 0.00 0.00 0.00 0.00 00:09:41.483 [2024-11-03T14:28:19.273Z] =================================================================================================================== 00:09:41.483 [2024-11-03T14:28:19.273Z] Total : 35504.70 138.69 0.00 0.00 0.00 0.00 0.00 00:09:41.483 00:09:41.483 00:09:41.483 Latency(us) 00:09:41.483 [2024-11-03T14:28:19.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.483 Nvme0n1 : 10.00 35504.99 138.69 0.00 0.00 3602.28 2319.97 17720.93 00:09:41.483 [2024-11-03T14:28:19.273Z] =================================================================================================================== 00:09:41.483 [2024-11-03T14:28:19.273Z] Total : 35504.99 138.69 0.00 0.00 3602.28 2319.97 17720.93 00:09:41.483 { 00:09:41.483 "results": [ 00:09:41.483 { 00:09:41.483 "job": "Nvme0n1", 00:09:41.483 "core_mask": "0x2", 00:09:41.483 "workload": "randwrite", 00:09:41.483 "status": "finished", 00:09:41.483 "queue_depth": 128, 00:09:41.483 "io_size": 4096, 
00:09:41.483 "runtime": 10.00313, 00:09:41.484 "iops": 35504.98693908806, 00:09:41.484 "mibps": 138.69135523081275, 00:09:41.484 "io_failed": 0, 00:09:41.484 "io_timeout": 0, 00:09:41.484 "avg_latency_us": 3602.276239186172, 00:09:41.484 "min_latency_us": 2319.9744, 00:09:41.484 "max_latency_us": 17720.9344 00:09:41.484 } 00:09:41.484 ], 00:09:41.484 "core_count": 1 00:09:41.484 } 00:09:41.484 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2145064 00:09:41.484 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2145064 ']' 00:09:41.484 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2145064 00:09:41.484 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:09:41.484 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:41.484 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2145064 00:09:41.743 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:41.743 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:41.743 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2145064' 00:09:41.743 killing process with pid 2145064 00:09:41.743 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2145064 00:09:41.743 Received shutdown signal, test time was about 10.000000 seconds 00:09:41.743 00:09:41.743 Latency(us) 00:09:41.743 [2024-11-03T14:28:19.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.743 [2024-11-03T14:28:19.533Z] =================================================================================================================== 00:09:41.743 [2024-11-03T14:28:19.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:41.743 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2145064 00:09:41.743 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:42.004 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:42.264 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:42.264 15:28:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:42.264 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:42.264 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:42.264 15:28:20 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2141991 00:09:42.264 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2141991 00:09:42.525 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2141991 Killed "${NVMF_APP[@]}" "$@" 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2147201 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2147201 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2147201 ']' 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 [2024-11-03 15:28:20.096288] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:09:42.525 [2024-11-03 15:28:20.096344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.525 [2024-11-03 15:28:20.172522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.525 [2024-11-03 15:28:20.193163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.525 [2024-11-03 15:28:20.193199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.525 [2024-11-03 15:28:20.193208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.525 [2024-11-03 15:28:20.193217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
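The kill -9 of pid 2141991 above is the crux of the dirty variant: the lvstore had been grown but was never cleanly unloaded, so the freshly started target must reconstruct it from on-disk blobstore metadata. A minimal sketch of that recovery path, assembled only from commands that appear in this trace (paths abbreviated; <lvs_uuid>/<lvol_uuid> as before):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                  # restart after the unclean shutdown
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  # loading the AIO bdev triggers blobstore recovery -- see the
  # "Performing recovery on blobstore" NOTICE lines below
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs_uuid>         # lvstore is back: 99 total, 61 free clusters
  scripts/rpc.py bdev_get_bdevs -b <lvol_uuid> -t 2000        # lvol reappears as lvs/lvol

The (( free_clusters == 61 )) and (( data_clusters == 99 )) checks further down pass only if recovery preserved both the grow and the 38 clusters backing the 150M thick-provisioned lvol.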
00:09:42.525 [2024-11-03 15:28:20.193224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.525 [2024-11-03 15:28:20.193769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.525 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:42.785 [2024-11-03 15:28:20.494127] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:42.785 [2024-11-03 15:28:20.494213] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:42.785 [2024-11-03 15:28:20.494239] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ea64ac92-cb9c-42b2-9aca-c5746812a759 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=ea64ac92-cb9c-42b2-9aca-c5746812a759 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:42.785 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:43.045 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ea64ac92-cb9c-42b2-9aca-c5746812a759 -t 2000 00:09:43.305 [ 00:09:43.305 { 00:09:43.306 "name": "ea64ac92-cb9c-42b2-9aca-c5746812a759", 00:09:43.306 "aliases": [ 00:09:43.306 "lvs/lvol" 00:09:43.306 ], 00:09:43.306 "product_name": "Logical Volume", 00:09:43.306 "block_size": 4096, 00:09:43.306 "num_blocks": 38912, 00:09:43.306 "uuid": "ea64ac92-cb9c-42b2-9aca-c5746812a759", 00:09:43.306 "assigned_rate_limits": { 00:09:43.306 "rw_ios_per_sec": 0, 00:09:43.306 "rw_mbytes_per_sec": 0, 
00:09:43.306 "r_mbytes_per_sec": 0, 00:09:43.306 "w_mbytes_per_sec": 0 00:09:43.306 }, 00:09:43.306 "claimed": false, 00:09:43.306 "zoned": false, 00:09:43.306 "supported_io_types": { 00:09:43.306 "read": true, 00:09:43.306 "write": true, 00:09:43.306 "unmap": true, 00:09:43.306 "flush": false, 00:09:43.306 "reset": true, 00:09:43.306 "nvme_admin": false, 00:09:43.306 "nvme_io": false, 00:09:43.306 "nvme_io_md": false, 00:09:43.306 "write_zeroes": true, 00:09:43.306 "zcopy": false, 00:09:43.306 "get_zone_info": false, 00:09:43.306 "zone_management": false, 00:09:43.306 "zone_append": false, 00:09:43.306 "compare": false, 00:09:43.306 "compare_and_write": false, 00:09:43.306 "abort": false, 00:09:43.306 "seek_hole": true, 00:09:43.306 "seek_data": true, 00:09:43.306 "copy": false, 00:09:43.306 "nvme_iov_md": false 00:09:43.306 }, 00:09:43.306 "driver_specific": { 00:09:43.306 "lvol": { 00:09:43.306 "lvol_store_uuid": "30a4fc0f-c32e-4c4e-9b01-e0a041bd5700", 00:09:43.306 "base_bdev": "aio_bdev", 00:09:43.306 "thin_provision": false, 00:09:43.306 "num_allocated_clusters": 38, 00:09:43.306 "snapshot": false, 00:09:43.306 "clone": false, 00:09:43.306 "esnap_clone": false 00:09:43.306 } 00:09:43.306 } 00:09:43.306 } 00:09:43.306 ] 00:09:43.306 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:43.306 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:43.306 15:28:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:43.306 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:43.306 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:43.306 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:43.566 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:43.566 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.827 [2024-11-03 15:28:21.418653] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:43.827 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:44.088 request: 00:09:44.088 { 00:09:44.088 "uuid": "30a4fc0f-c32e-4c4e-9b01-e0a041bd5700", 00:09:44.088 "method": "bdev_lvol_get_lvstores", 00:09:44.088 "req_id": 1 00:09:44.088 } 00:09:44.088 Got JSON-RPC error response 00:09:44.088 response: 00:09:44.088 { 00:09:44.088 "code": -19, 00:09:44.088 "message": "No such device" 00:09:44.088 } 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.088 aio_bdev 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ea64ac92-cb9c-42b2-9aca-c5746812a759 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=ea64ac92-cb9c-42b2-9aca-c5746812a759 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:44.088 15:28:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:44.088 15:28:21 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.348 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ea64ac92-cb9c-42b2-9aca-c5746812a759 -t 2000 00:09:44.609 [ 00:09:44.609 { 00:09:44.609 "name": "ea64ac92-cb9c-42b2-9aca-c5746812a759", 00:09:44.609 "aliases": [ 00:09:44.609 "lvs/lvol" 00:09:44.609 ], 00:09:44.609 "product_name": "Logical Volume", 00:09:44.609 "block_size": 4096, 00:09:44.609 "num_blocks": 38912, 00:09:44.609 "uuid": "ea64ac92-cb9c-42b2-9aca-c5746812a759", 00:09:44.609 "assigned_rate_limits": { 00:09:44.609 "rw_ios_per_sec": 0, 00:09:44.609 "rw_mbytes_per_sec": 0, 00:09:44.609 "r_mbytes_per_sec": 0, 00:09:44.609 "w_mbytes_per_sec": 0 00:09:44.609 }, 00:09:44.609 "claimed": false, 00:09:44.609 "zoned": false, 00:09:44.609 "supported_io_types": { 00:09:44.609 "read": true, 00:09:44.609 "write": true, 00:09:44.609 "unmap": true, 00:09:44.609 "flush": false, 00:09:44.609 "reset": true, 00:09:44.609 "nvme_admin": false, 00:09:44.609 "nvme_io": false, 00:09:44.609 "nvme_io_md": false, 00:09:44.609 "write_zeroes": true, 00:09:44.609 "zcopy": false, 00:09:44.609 "get_zone_info": false, 00:09:44.609 "zone_management": false, 00:09:44.609 "zone_append": false, 00:09:44.609 "compare": false, 00:09:44.609 "compare_and_write": false, 00:09:44.609 "abort": false, 00:09:44.609 "seek_hole": true, 00:09:44.609 "seek_data": true, 00:09:44.609 "copy": false, 00:09:44.609 "nvme_iov_md": false 00:09:44.609 }, 00:09:44.609 "driver_specific": { 00:09:44.609 "lvol": { 00:09:44.609 "lvol_store_uuid": "30a4fc0f-c32e-4c4e-9b01-e0a041bd5700", 00:09:44.609 "base_bdev": "aio_bdev", 00:09:44.609 "thin_provision": false, 00:09:44.609 "num_allocated_clusters": 38, 00:09:44.609 "snapshot": false, 00:09:44.609 "clone": false, 00:09:44.609 "esnap_clone": false 00:09:44.609 } 00:09:44.609 } 00:09:44.609 } 00:09:44.609 ] 00:09:44.609 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:44.609 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:44.609 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.609 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.609 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:44.609 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:44.869 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:44.870 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ea64ac92-cb9c-42b2-9aca-c5746812a759 00:09:45.130 15:28:22 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 30a4fc0f-c32e-4c4e-9b01-e0a041bd5700 00:09:45.390 15:28:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.390 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.390 00:09:45.390 real 0m16.621s 00:09:45.390 user 0m43.648s 00:09:45.390 sys 0m3.261s 00:09:45.390 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:45.390 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.390 ************************************ 00:09:45.390 END TEST lvs_grow_dirty 00:09:45.390 ************************************ 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:45.650 nvmf_trace.0 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:45.650 rmmod nvme_rdma 00:09:45.650 rmmod nvme_fabrics 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:45.650 
15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2147201 ']' 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2147201 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2147201 ']' 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2147201 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2147201 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2147201' 00:09:45.650 killing process with pid 2147201 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2147201 00:09:45.650 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2147201 00:09:45.910 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.910 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:45.910 00:09:45.910 real 0m39.551s 00:09:45.910 user 1m4.306s 00:09:45.910 sys 0m9.860s 00:09:45.910 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:45.910 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.911 ************************************ 00:09:45.911 END TEST nvmf_lvs_grow 00:09:45.911 ************************************ 00:09:45.911 15:28:23 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:45.911 15:28:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:45.911 15:28:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:45.911 15:28:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.911 ************************************ 00:09:45.911 START TEST nvmf_bdev_io_wait 00:09:45.911 ************************************ 00:09:45.911 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:45.911 * Looking for test storage... 
00:09:45.911 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:45.911 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:45.911 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:45.911 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.170 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.170 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.170 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.170 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.170 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.170 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.171 --rc genhtml_branch_coverage=1 00:09:46.171 --rc genhtml_function_coverage=1 00:09:46.171 --rc genhtml_legend=1 00:09:46.171 --rc geninfo_all_blocks=1 00:09:46.171 --rc geninfo_unexecuted_blocks=1 00:09:46.171 00:09:46.171 ' 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.171 --rc genhtml_branch_coverage=1 00:09:46.171 --rc genhtml_function_coverage=1 00:09:46.171 --rc genhtml_legend=1 00:09:46.171 --rc geninfo_all_blocks=1 00:09:46.171 --rc geninfo_unexecuted_blocks=1 00:09:46.171 00:09:46.171 ' 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:46.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.171 --rc genhtml_branch_coverage=1 00:09:46.171 --rc genhtml_function_coverage=1 00:09:46.171 --rc genhtml_legend=1 00:09:46.171 --rc geninfo_all_blocks=1 00:09:46.171 --rc geninfo_unexecuted_blocks=1 00:09:46.171 00:09:46.171 ' 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.171 --rc genhtml_branch_coverage=1 00:09:46.171 --rc genhtml_function_coverage=1 00:09:46.171 --rc genhtml_legend=1 00:09:46.171 --rc geninfo_all_blocks=1 00:09:46.171 --rc geninfo_unexecuted_blocks=1 00:09:46.171 00:09:46.171 ' 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.171 15:28:23 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.171 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.171 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.172 15:28:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.752 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.752 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.752 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.752 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.752 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.753 15:28:30 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:52.753 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:52.753 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:52.753 15:28:30 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:52.753 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:52.753 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:52.753 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:52.754 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:52.754 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:52.754 altname enp217s0f0np0 00:09:52.754 altname ens818f0np0 00:09:52.754 inet 192.168.100.8/24 scope global mlx_0_0 00:09:52.754 valid_lft forever preferred_lft forever 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:52.754 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:52.754 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:52.754 altname enp217s0f1np1 00:09:52.754 altname ens818f1np1 00:09:52.754 inet 192.168.100.9/24 scope global mlx_0_1 00:09:52.754 valid_lft forever preferred_lft forever 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
-t rxe_net_devs 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:52.754 192.168.100.9' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:52.754 192.168.100.9' 00:09:52.754 
15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:52.754 192.168.100.9' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2151184 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2151184 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2151184 ']' 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:52.754 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.755 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:52.755 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.013 [2024-11-03 15:28:30.581906] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
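Distilled from the get_ip_address and RDMA_IP_LIST steps traced above, the address discovery reduces to the sketch below. The helper body and the head/tail split are taken from the trace; the loop is a simplification (the harness iterates whatever get_rdma_if_list returns), and the interface names are simply the two this run found.

get_ip_address() {
    local interface=$1
    # column 4 of `ip -o -4 addr show` is the CIDR address (e.g. 192.168.100.8/24);
    # cut drops the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9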
00:09:53.013 [2024-11-03 15:28:30.581987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.013 [2024-11-03 15:28:30.662257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.013 [2024-11-03 15:28:30.686958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.013 [2024-11-03 15:28:30.687005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.013 [2024-11-03 15:28:30.687015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.013 [2024-11-03 15:28:30.687024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.013 [2024-11-03 15:28:30.687032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.013 [2024-11-03 15:28:30.688672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.013 [2024-11-03 15:28:30.688773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.013 [2024-11-03 15:28:30.688857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.013 [2024-11-03 15:28:30.688859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.013 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:53.013 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:09:53.013 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.013 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.013 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.014 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.014 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:53.014 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.014 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.014 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.014 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:53.014 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.014 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.273 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.273 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:53.273 15:28:30 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.273 15:28:30 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.273 [2024-11-03 15:28:30.878161] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x138bba0/0x1390050) succeed. 00:09:53.273 [2024-11-03 15:28:30.887805] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x138d1e0/0x13d16f0) succeed. 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.273 Malloc0 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.273 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.533 [2024-11-03 15:28:31.068270] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2151256 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2151258 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
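Condensed from the rpc_cmd calls traced above, the target-side bring-up amounts to the following RPC sequence. This is a sketch: $rpc stands in for the full rpc.py path used in this workspace, and the harness's rpc_cmd wrapper adds retry plumbing omitted here.

rpc=$SPDK_DIR/scripts/rpc.py                    # stand-in for the full workspace path
$rpc bdev_set_options -p 5 -c 1                 # bdev I/O pool and cache sizing, set before init
$rpc framework_start_init                       # completes the --wait-for-rpc startup
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MB bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420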
00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.533 { 00:09:53.533 "params": { 00:09:53.533 "name": "Nvme$subsystem", 00:09:53.533 "trtype": "$TEST_TRANSPORT", 00:09:53.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.533 "adrfam": "ipv4", 00:09:53.533 "trsvcid": "$NVMF_PORT", 00:09:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.533 "hdgst": ${hdgst:-false}, 00:09:53.533 "ddgst": ${ddgst:-false} 00:09:53.533 }, 00:09:53.533 "method": "bdev_nvme_attach_controller" 00:09:53.533 } 00:09:53.533 EOF 00:09:53.533 )") 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2151260 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.533 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.533 { 00:09:53.533 "params": { 00:09:53.533 "name": "Nvme$subsystem", 00:09:53.533 "trtype": "$TEST_TRANSPORT", 00:09:53.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.533 "adrfam": "ipv4", 00:09:53.533 "trsvcid": "$NVMF_PORT", 00:09:53.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.533 "hdgst": ${hdgst:-false}, 00:09:53.533 "ddgst": ${ddgst:-false} 00:09:53.533 }, 00:09:53.533 "method": "bdev_nvme_attach_controller" 00:09:53.533 } 00:09:53.533 EOF 00:09:53.533 )") 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2151263 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.534 { 00:09:53.534 "params": { 00:09:53.534 "name": "Nvme$subsystem", 00:09:53.534 "trtype": "$TEST_TRANSPORT", 
00:09:53.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.534 "adrfam": "ipv4", 00:09:53.534 "trsvcid": "$NVMF_PORT", 00:09:53.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.534 "hdgst": ${hdgst:-false}, 00:09:53.534 "ddgst": ${ddgst:-false} 00:09:53.534 }, 00:09:53.534 "method": "bdev_nvme_attach_controller" 00:09:53.534 } 00:09:53.534 EOF 00:09:53.534 )") 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.534 { 00:09:53.534 "params": { 00:09:53.534 "name": "Nvme$subsystem", 00:09:53.534 "trtype": "$TEST_TRANSPORT", 00:09:53.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.534 "adrfam": "ipv4", 00:09:53.534 "trsvcid": "$NVMF_PORT", 00:09:53.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.534 "hdgst": ${hdgst:-false}, 00:09:53.534 "ddgst": ${ddgst:-false} 00:09:53.534 }, 00:09:53.534 "method": "bdev_nvme_attach_controller" 00:09:53.534 } 00:09:53.534 EOF 00:09:53.534 )") 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2151256 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.534 "params": { 00:09:53.534 "name": "Nvme1", 00:09:53.534 "trtype": "rdma", 00:09:53.534 "traddr": "192.168.100.8", 00:09:53.534 "adrfam": "ipv4", 00:09:53.534 "trsvcid": "4420", 00:09:53.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.534 "hdgst": false, 00:09:53.534 "ddgst": false 00:09:53.534 }, 00:09:53.534 "method": "bdev_nvme_attach_controller" 00:09:53.534 }' 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.534 "params": { 00:09:53.534 "name": "Nvme1", 00:09:53.534 "trtype": "rdma", 00:09:53.534 "traddr": "192.168.100.8", 00:09:53.534 "adrfam": "ipv4", 00:09:53.534 "trsvcid": "4420", 00:09:53.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.534 "hdgst": false, 00:09:53.534 "ddgst": false 00:09:53.534 }, 00:09:53.534 "method": "bdev_nvme_attach_controller" 00:09:53.534 }' 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.534 "params": { 00:09:53.534 "name": "Nvme1", 00:09:53.534 "trtype": "rdma", 00:09:53.534 "traddr": "192.168.100.8", 00:09:53.534 "adrfam": "ipv4", 00:09:53.534 "trsvcid": "4420", 00:09:53.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.534 "hdgst": false, 00:09:53.534 "ddgst": false 00:09:53.534 }, 00:09:53.534 "method": "bdev_nvme_attach_controller" 00:09:53.534 }' 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:53.534 15:28:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.534 "params": { 00:09:53.534 "name": "Nvme1", 00:09:53.534 "trtype": "rdma", 00:09:53.534 "traddr": "192.168.100.8", 00:09:53.534 "adrfam": "ipv4", 00:09:53.534 "trsvcid": "4420", 00:09:53.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.534 "hdgst": false, 00:09:53.534 "ddgst": false 00:09:53.534 }, 00:09:53.534 "method": "bdev_nvme_attach_controller" 00:09:53.534 }' 00:09:53.534 [2024-11-03 15:28:31.120951] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:09:53.534 [2024-11-03 15:28:31.121011] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:53.534 [2024-11-03 15:28:31.122741] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:09:53.534 [2024-11-03 15:28:31.122793] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:53.534 [2024-11-03 15:28:31.122886] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:09:53.534 [2024-11-03 15:28:31.122931] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:53.534 [2024-11-03 15:28:31.124439] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
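Each of the four bdevperf instances above is launched the same way; a sketch of the pattern follows. The --json /dev/fd/63 in the trace is consistent with feeding gen_nvmf_target_json through process substitution (an inference from the fd number, not shown explicitly), and $bdevperf abbreviates the full workspace path.

bdevperf=$SPDK_DIR/build/examples/bdevperf   # stand-in for the full workspace path
# one instance per workload: own core mask (-m) and instance id (-i), queue depth 128,
# 4 KiB I/O (-o), 1-second runs (-t), 256 MB of memory (-s), config piped in as JSON
$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
$bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
$bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
$bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID   # the @37-@40 waits below, combined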
00:09:53.534 [2024-11-03 15:28:31.124486] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:53.534 [2024-11-03 15:28:31.320575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.794 [2024-11-03 15:28:31.336146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.794 [2024-11-03 15:28:31.420799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.794 [2024-11-03 15:28:31.458371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:53.794 [2024-11-03 15:28:31.482864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.794 [2024-11-03 15:28:31.497997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:53.794 [2024-11-03 15:28:31.539164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.794 [2024-11-03 15:28:31.554487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:54.055 Running I/O for 1 seconds... 00:09:54.055 Running I/O for 1 seconds... 00:09:54.055 Running I/O for 1 seconds... 00:09:54.055 Running I/O for 1 seconds... 00:09:54.993 16430.00 IOPS, 64.18 MiB/s [2024-11-03T14:28:32.783Z] 263528.00 IOPS, 1029.41 MiB/s [2024-11-03T14:28:32.783Z] 15305.00 IOPS, 59.79 MiB/s 00:09:54.993 Latency(us) 00:09:54.993 [2024-11-03T14:28:32.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.993 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:54.993 Nvme1n1 : 1.00 263140.24 1027.89 0.00 0.00 483.51 211.35 1939.87 00:09:54.993 [2024-11-03T14:28:32.783Z] =================================================================================================================== 00:09:54.993 [2024-11-03T14:28:32.783Z] Total : 263140.24 1027.89 0.00 0.00 483.51 211.35 1939.87 00:09:54.993 00:09:54.993 Latency(us) 00:09:54.993 [2024-11-03T14:28:32.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.993 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:54.994 Nvme1n1 : 1.01 16470.20 64.34 0.00 0.00 7746.88 4797.24 17406.36 00:09:54.994 [2024-11-03T14:28:32.784Z] =================================================================================================================== 00:09:54.994 [2024-11-03T14:28:32.784Z] Total : 16470.20 64.34 0.00 0.00 7746.88 4797.24 17406.36 00:09:54.994 00:09:54.994 Latency(us) 00:09:54.994 [2024-11-03T14:28:32.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.994 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:54.994 Nvme1n1 : 1.01 15346.92 59.95 0.00 0.00 8313.25 5242.88 16777.22 00:09:54.994 [2024-11-03T14:28:32.784Z] =================================================================================================================== 00:09:54.994 [2024-11-03T14:28:32.784Z] Total : 15346.92 59.95 0.00 0.00 8313.25 5242.88 16777.22 00:09:54.994 18092.00 IOPS, 70.67 MiB/s 00:09:54.994 Latency(us) 00:09:54.994 [2024-11-03T14:28:32.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.994 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:54.994 Nvme1n1 : 1.01 18187.15 71.04 0.00 0.00 7022.94 2634.55 14889.78 00:09:54.994 [2024-11-03T14:28:32.784Z] 
=================================================================================================================== 00:09:54.994 [2024-11-03T14:28:32.784Z] Total : 18187.15 71.04 0.00 0.00 7022.94 2634.55 14889.78 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2151258 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2151260 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2151263 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:55.254 rmmod nvme_rdma 00:09:55.254 rmmod nvme_fabrics 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2151184 ']' 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2151184 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2151184 ']' 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2151184 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2151184 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
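Quick sanity check on the tables above: the MiB/s column is just IOPS times the 4096-byte IO size from the bdevperf command line, divided by 2^20. Reproducing the unmap row:

awk 'BEGIN { iops = 18187.15; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 71.04 MiB/s, matching the Nvme1n1 unmap result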
sudo ']' 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2151184' 00:09:55.254 killing process with pid 2151184 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2151184 00:09:55.254 15:28:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2151184 00:09:55.514 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:55.514 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:55.514 00:09:55.514 real 0m9.617s 00:09:55.514 user 0m16.918s 00:09:55.514 sys 0m6.490s 00:09:55.514 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.514 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.514 ************************************ 00:09:55.514 END TEST nvmf_bdev_io_wait 00:09:55.514 ************************************ 00:09:55.514 15:28:33 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:55.514 15:28:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:55.514 15:28:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.514 15:28:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.514 ************************************ 00:09:55.514 START TEST nvmf_queue_depth 00:09:55.514 ************************************ 00:09:55.514 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:55.773 * Looking for test storage... 
00:09:55.773 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.773 --rc genhtml_branch_coverage=1 00:09:55.773 --rc genhtml_function_coverage=1 00:09:55.773 --rc genhtml_legend=1 00:09:55.773 --rc geninfo_all_blocks=1 00:09:55.773 --rc geninfo_unexecuted_blocks=1 00:09:55.773 00:09:55.773 ' 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.773 --rc genhtml_branch_coverage=1 00:09:55.773 --rc genhtml_function_coverage=1 00:09:55.773 --rc genhtml_legend=1 00:09:55.773 --rc geninfo_all_blocks=1 00:09:55.773 --rc geninfo_unexecuted_blocks=1 00:09:55.773 00:09:55.773 ' 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:55.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.773 --rc genhtml_branch_coverage=1 00:09:55.773 --rc genhtml_function_coverage=1 00:09:55.773 --rc genhtml_legend=1 00:09:55.773 --rc geninfo_all_blocks=1 00:09:55.773 --rc geninfo_unexecuted_blocks=1 00:09:55.773 00:09:55.773 ' 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.773 --rc genhtml_branch_coverage=1 00:09:55.773 --rc genhtml_function_coverage=1 00:09:55.773 --rc genhtml_legend=1 00:09:55.773 --rc geninfo_all_blocks=1 00:09:55.773 --rc geninfo_unexecuted_blocks=1 00:09:55.773 00:09:55.773 ' 00:09:55.773 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.773 15:28:33 
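The scripts/common.sh trace above (cmp_versions, decimal, the ver1/ver2 arrays) is a component-wise version comparison: it establishes that the installed lcov 1.15 is older than 2, so the pre-2.0 --rc lcov_branch_coverage flag set is exported. A standalone sketch of the same comparison, not the exact helper:

version_lt() { # true if $1 sorts before $2, comparing dotted components numerically
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
    ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
  done
  return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.0: keep --rc lcov_branch_coverage=1"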
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.774 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
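The "[: : integer expression expected" line above is a real but harmless error: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', test(1) gets an empty string where -eq needs an integer, the test simply fails, and the script carries on (the very next trace line executes). The failure mode in isolation, plus a defensive variant; SOME_FLAG is a hypothetical name, not the variable common.sh uses:

[ "" -eq 1 ]                  # bash: [: : integer expression expected (status 2)
[ "${SOME_FLAG:-0}" -eq 1 ]   # defaulting the expansion keeps the test well-formed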
MALLOC_BLOCK_SIZE=512 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.774 15:28:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:03.903 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:03.903 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:03.903 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:03.903 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
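The "Found net devices under ..." lines come from a sysfs glob: every network-capable PCI function lists its kernel interfaces under /sys/bus/pci/devices/<addr>/net/. Reduced to the essential steps, with the address discovered on this test bed:

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"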
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:03.903 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
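load_ib_rdma_modules above is nothing more than these seven modprobes in order; they are the kernel pieces (IB core, user verbs, connection managers) that have to be present before the RDMA interfaces can be enumerated. The same thing as a loop:

for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$m"
done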
$(get_rdma_if_list) 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:03.904 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:03.904 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:03.904 altname enp217s0f0np0 00:10:03.904 altname ens818f0np0 00:10:03.904 inet 192.168.100.8/24 scope global mlx_0_0 00:10:03.904 valid_lft forever preferred_lft forever 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:03.904 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:03.904 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:03.904 altname enp217s0f1np1 00:10:03.904 altname ens818f1np1 00:10:03.904 inet 192.168.100.9/24 scope global mlx_0_1 00:10:03.904 valid_lft forever preferred_lft forever 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:03.904 15:28:40 
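The per-interface address probe traced above reduces to one pipeline: field 4 of "ip -o -4 addr show" is the CIDR address and cut strips the prefix length. On this test bed it yields the two target addresses used for the rest of the run:

get_ip_address() {
  ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8
get_ip_address mlx_0_1   # -> 192.168.100.9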
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:03.904 192.168.100.9' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:03.904 192.168.100.9' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:03.904 192.168.100.9' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2154990 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2154990 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2154990 ']' 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:03.904 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.904 [2024-11-03 15:28:40.598128] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
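RDMA_IP_LIST above is a newline-separated string, so the first and second target addresses fall out with head and tail exactly as traced:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9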
00:10:03.905 [2024-11-03 15:28:40.598181] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.905 [2024-11-03 15:28:40.679802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.905 [2024-11-03 15:28:40.700521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.905 [2024-11-03 15:28:40.700558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.905 [2024-11-03 15:28:40.700567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.905 [2024-11-03 15:28:40.700576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.905 [2024-11-03 15:28:40.700582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.905 [2024-11-03 15:28:40.701184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.905 [2024-11-03 15:28:40.854973] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x131bf50/0x1320400) succeed. 00:10:03.905 [2024-11-03 15:28:40.864027] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x131d3b0/0x1361aa0) succeed. 
00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.905 Malloc0 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.905 [2024-11-03 15:28:40.961159] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2155060 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2155060 /var/tmp/bdevperf.sock 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2155060 ']' 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 
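Collected from the rpc_cmd traces above, the target side of the queue_depth test is five RPCs end to end. The rpc.py path is assumed relative to the SPDK tree; the transport options, Malloc0 geometry (64 MiB, 512-byte blocks), NQN, serial, address and port are all as used in this run:

RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420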
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:03.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:03.905 15:28:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.905 [2024-11-03 15:28:41.010767] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:10:03.905 [2024-11-03 15:28:41.010814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155060 ] 00:10:03.905 [2024-11-03 15:28:41.088273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.905 [2024-11-03 15:28:41.111109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.905 15:28:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:03.905 15:28:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:10:03.905 15:28:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:03.905 15:28:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.905 15:28:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.905 NVMe0n1 00:10:03.905 15:28:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.905 15:28:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:03.905 Running I/O for 10 seconds... 
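The initiator side mirrors the trace above: bdevperf starts idle (-z) on a private RPC socket, the remote namespace is attached through that socket, and bdevperf.py launches the 10-second verify run at queue depth 1024. The harness also waits for the socket to appear before issuing RPCs, elided here:

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests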
00:10:05.780 17408.00 IOPS, 68.00 MiB/s [2024-11-03T14:28:44.507Z] 17822.00 IOPS, 69.62 MiB/s [2024-11-03T14:28:45.444Z] 17890.33 IOPS, 69.88 MiB/s [2024-11-03T14:28:46.824Z] 17920.00 IOPS, 70.00 MiB/s [2024-11-03T14:28:47.761Z] 18020.20 IOPS, 70.39 MiB/s [2024-11-03T14:28:48.700Z] 18017.50 IOPS, 70.38 MiB/s [2024-11-03T14:28:49.639Z] 17997.29 IOPS, 70.30 MiB/s [2024-11-03T14:28:50.577Z] 18048.00 IOPS, 70.50 MiB/s [2024-11-03T14:28:51.515Z] 18008.11 IOPS, 70.34 MiB/s [2024-11-03T14:28:51.515Z] 18022.40 IOPS, 70.40 MiB/s 00:10:13.725 Latency(us) 00:10:13.725 [2024-11-03T14:28:51.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.725 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:13.725 Verification LBA range: start 0x0 length 0x4000 00:10:13.725 NVMe0n1 : 10.04 18047.33 70.50 0.00 0.00 56603.39 22229.81 35651.58 00:10:13.725 [2024-11-03T14:28:51.515Z] =================================================================================================================== 00:10:13.725 [2024-11-03T14:28:51.515Z] Total : 18047.33 70.50 0.00 0.00 56603.39 22229.81 35651.58 00:10:13.725 { 00:10:13.725 "results": [ 00:10:13.725 { 00:10:13.725 "job": "NVMe0n1", 00:10:13.725 "core_mask": "0x1", 00:10:13.725 "workload": "verify", 00:10:13.725 "status": "finished", 00:10:13.725 "verify_range": { 00:10:13.725 "start": 0, 00:10:13.725 "length": 16384 00:10:13.725 }, 00:10:13.725 "queue_depth": 1024, 00:10:13.725 "io_size": 4096, 00:10:13.725 "runtime": 10.042927, 00:10:13.725 "iops": 18047.328234089524, 00:10:13.725 "mibps": 70.4973759144122, 00:10:13.725 "io_failed": 0, 00:10:13.725 "io_timeout": 0, 00:10:13.725 "avg_latency_us": 56603.385057627114, 00:10:13.725 "min_latency_us": 22229.8112, 00:10:13.725 "max_latency_us": 35651.584 00:10:13.725 } 00:10:13.725 ], 00:10:13.725 "core_count": 1 00:10:13.725 } 00:10:13.725 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2155060 00:10:13.725 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2155060 ']' 00:10:13.725 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2155060 00:10:13.725 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:10:13.725 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:13.725 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2155060 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2155060' 00:10:13.985 killing process with pid 2155060 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2155060 00:10:13.985 Received shutdown signal, test time was about 10.000000 seconds 00:10:13.985 00:10:13.985 Latency(us) 00:10:13.985 [2024-11-03T14:28:51.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.985 [2024-11-03T14:28:51.775Z] 
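Besides the human-readable table, the run prints its results as JSON (the block above). Assuming that blob has been captured to a file, say results.json, a jq one-liner pulls out the headline numbers using the field names printed there:

jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json
# NVMe0n1: 18047.328234089524 IOPS, 70.4973759144122 MiB/s, avg latency 56603.385057627114 us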
=================================================================================================================== 00:10:13.985 [2024-11-03T14:28:51.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2155060 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:13.985 rmmod nvme_rdma 00:10:13.985 rmmod nvme_fabrics 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2154990 ']' 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2154990 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2154990 ']' 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2154990 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:13.985 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2154990 00:10:14.245 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:14.245 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:14.245 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2154990' 00:10:14.245 killing process with pid 2154990 00:10:14.245 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2154990 00:10:14.245 15:28:51 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2154990 00:10:14.245 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.245 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:14.245 00:10:14.245 real 0m18.746s 00:10:14.245 user 0m24.200s 00:10:14.245 sys 0m6.053s 00:10:14.245 
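The JSON block emitted above is machine-readable. A small sketch of pulling the headline numbers back out with jq, assuming the block has been saved to results.json (hypothetical filename):
# select the single job entry and keep only the key metrics
jq '.results[0] | {job, iops, mibps, avg_latency_us}' results.json
# roughly: {"job":"NVMe0n1","iops":18047.33,"mibps":70.50,"avg_latency_us":56603.39}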
15:28:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:14.245 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.245 ************************************ 00:10:14.245 END TEST nvmf_queue_depth 00:10:14.245 ************************************ 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.504 ************************************ 00:10:14.504 START TEST nvmf_target_multipath 00:10:14.504 ************************************ 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:14.504 * Looking for test storage... 00:10:14.504 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:14.504 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:14.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.764 --rc genhtml_branch_coverage=1 00:10:14.764 --rc genhtml_function_coverage=1 00:10:14.764 --rc genhtml_legend=1 00:10:14.764 --rc geninfo_all_blocks=1 00:10:14.764 --rc geninfo_unexecuted_blocks=1 00:10:14.764 00:10:14.764 ' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:14.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.764 --rc genhtml_branch_coverage=1 00:10:14.764 --rc genhtml_function_coverage=1 00:10:14.764 --rc genhtml_legend=1 00:10:14.764 --rc geninfo_all_blocks=1 00:10:14.764 --rc geninfo_unexecuted_blocks=1 00:10:14.764 00:10:14.764 ' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:14.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.764 --rc genhtml_branch_coverage=1 00:10:14.764 --rc genhtml_function_coverage=1 00:10:14.764 --rc genhtml_legend=1 00:10:14.764 --rc geninfo_all_blocks=1 00:10:14.764 --rc geninfo_unexecuted_blocks=1 00:10:14.764 00:10:14.764 ' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:14.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.764 --rc genhtml_branch_coverage=1 00:10:14.764 --rc genhtml_function_coverage=1 00:10:14.764 --rc genhtml_legend=1 00:10:14.764 --rc geninfo_all_blocks=1 00:10:14.764 --rc geninfo_unexecuted_blocks=1 00:10:14.764 00:10:14.764 ' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.764 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.764 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.765 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.765 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.765 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.765 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.765 15:28:52 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:22.888 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:22.888 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.888 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:22.889 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:22.889 
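The discovery loop above keys on PCI vendor/device IDs (0x15b3:0x1015 is a Mellanox ConnectX-4 Lx) and resolves each function to its net device through sysfs. A rough manual equivalent, assuming lspci is available (the sysfs path is the same one the script globs):
lspci -d 15b3:1015                        # list both ConnectX-4 Lx functions
ls /sys/bus/pci/devices/0000:d9:00.0/net  # -> mlx_0_0
ls /sys/bus/pci/devices/0000:d9:00.1/net  # -> mlx_0_1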
15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:22.889 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
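The rdma_device_init step above is the kernel IB/RDMA stack being loaded module by module; condensed, the same sequence is:
# core IB modules, user-space verbs, and the connection managers
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
done
The allocate_nic_ips pass that starts here then derives each interface address exactly as traced below, e.g.:
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8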
00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:22.889 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:22.889 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:22.889 altname enp217s0f0np0 00:10:22.889 altname ens818f0np0 00:10:22.889 inet 192.168.100.8/24 scope global mlx_0_0 00:10:22.889 valid_lft forever preferred_lft forever 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:22.889 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:22.889 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:22.889 altname enp217s0f1np1 00:10:22.889 altname ens818f1np1 00:10:22.889 inet 192.168.100.9/24 scope global mlx_0_1 00:10:22.889 valid_lft forever preferred_lft forever 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:22.889 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:22.890 192.168.100.9' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:22.890 192.168.100.9' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:22.890 192.168.100.9' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:10:22.890 run this test only with TCP transport for now 00:10:22.890 15:28:59 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:22.890 rmmod nvme_rdma 00:10:22.890 rmmod nvme_fabrics 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:22.890 00:10:22.890 real 0m7.428s 00:10:22.890 user 0m2.145s 00:10:22.890 sys 0m5.485s 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.890 ************************************ 00:10:22.890 END TEST nvmf_target_multipath 00:10:22.890 ************************************ 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.890 ************************************ 00:10:22.890 START TEST nvmf_zcopy 00:10:22.890 ************************************ 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:22.890 * Looking for test storage... 00:10:22.890 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:22.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.890 --rc genhtml_branch_coverage=1 00:10:22.890 --rc genhtml_function_coverage=1 00:10:22.890 --rc genhtml_legend=1 00:10:22.890 --rc geninfo_all_blocks=1 00:10:22.890 --rc geninfo_unexecuted_blocks=1 00:10:22.890 00:10:22.890 ' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:22.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.890 --rc genhtml_branch_coverage=1 00:10:22.890 --rc genhtml_function_coverage=1 00:10:22.890 --rc genhtml_legend=1 00:10:22.890 --rc geninfo_all_blocks=1 00:10:22.890 --rc geninfo_unexecuted_blocks=1 00:10:22.890 00:10:22.890 ' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:22.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.890 --rc genhtml_branch_coverage=1 00:10:22.890 --rc genhtml_function_coverage=1 00:10:22.890 --rc genhtml_legend=1 00:10:22.890 --rc geninfo_all_blocks=1 00:10:22.890 --rc geninfo_unexecuted_blocks=1 00:10:22.890 00:10:22.890 ' 00:10:22.890 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:22.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.891 --rc genhtml_branch_coverage=1 00:10:22.891 --rc genhtml_function_coverage=1 00:10:22.891 --rc genhtml_legend=1 00:10:22.891 --rc geninfo_all_blocks=1 00:10:22.891 --rc geninfo_unexecuted_blocks=1 00:10:22.891 00:10:22.891 ' 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.891 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.891 15:28:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:29.552 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:29.552 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
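The device scan traced above builds per-vendor device-ID lists (Intel e810/x722, Mellanox mlx) and matches each installed NIC against them; this node reports two Mellanox ports (0x15b3 - 0x1015, ConnectX-4 Lx). A minimal standalone sketch of the same matching, reading the IDs straight from sysfs rather than through the pci_bus_cache helper the harness uses:

# Sketch only: match PCI NICs against a supported vendor:device list,
# as gather_supported_nvmf_pci_devs does via pci_bus_cache.
mellanox=0x15b3
supported=("$mellanox:0x1015" "$mellanox:0x1017")   # trimmed list for illustration
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    for id in "${supported[@]}"; do
        [[ "$vendor:$device" == "$id" ]] && echo "Found ${dev##*/} ($vendor - $device)"
    done
done
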
00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:29.552 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:29.552 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.552 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:29.553 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.553 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:29.553 altname enp217s0f0np0 00:10:29.553 altname ens818f0np0 00:10:29.553 inet 192.168.100.8/24 scope global mlx_0_0 
00:10:29.553 valid_lft forever preferred_lft forever 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:29.553 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.553 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:29.553 altname enp217s0f1np1 00:10:29.553 altname ens818f1np1 00:10:29.553 inet 192.168.100.9/24 scope global mlx_0_1 00:10:29.553 valid_lft forever preferred_lft forever 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.553 15:29:06 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:29.553 192.168.100.9' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:29.553 192.168.100.9' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:29.553 192.168.100.9' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2163745 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2163745 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2163745 ']' 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:29.553 15:29:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.553 [2024-11-03 15:29:06.917595] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:10:29.553 [2024-11-03 15:29:06.917661] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.553 [2024-11-03 15:29:06.993839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.553 [2024-11-03 15:29:07.014653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.553 [2024-11-03 15:29:07.014686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.553 [2024-11-03 15:29:07.014695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.553 [2024-11-03 15:29:07.014703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.553 [2024-11-03 15:29:07.014710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
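A few records up, allocate_nic_ips and get_available_rdma_ips resolve each RDMA interface's address with the same three-stage pipeline (nvmf/common.sh@117) and then split the collected list into first and second target IPs (@485-486). Both steps in isolation, assuming the two addresses seen in this run:

# Per-interface address, exactly as traced at nvmf/common.sh@117:
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8

# First/second target IP split over the collected list (@485-486):
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
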
00:10:29.553 [2024-11-03 15:29:07.015320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:10:29.553 Unsupported transport: rdma 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # type=--id 00:10:29.553 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@811 -- # id=0 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@822 -- # for n in $shm_files 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:29.554 nvmf_trace.0 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # return 0 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:29.554 rmmod nvme_rdma 00:10:29.554 rmmod nvme_fabrics 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
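zcopy.sh exits early here ("Unsupported transport: rdma", exit 0), which is what fires the EXIT trap installed at nvmf/common.sh@512: process_shm archives the /dev/shm trace file, then nvmftestfini unloads nvme-rdma (the rmmod lines just above). A sketch of that trap shape, with names kept from the trace; $output_dir stands in for the .../spdk/../output path and is not a variable shown here:

# Sketch of the harness's exit hook (trap set at nvmf/common.sh@512).
on_exit() {
    shm_files=$(find /dev/shm -name "*.$NVMF_APP_SHM_ID" -printf '%f\n')
    if [[ -n $shm_files ]]; then
        tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.${NVMF_APP_SHM_ID}_shm.tar.gz" $shm_files
    fi
    nvmftestfini   # sync, modprobe -v -r nvme-rdma, stop the target
}
trap 'on_exit' SIGINT SIGTERM EXIT
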
00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2163745 ']' 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2163745 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2163745 ']' 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2163745 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2163745 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2163745' 00:10:29.554 killing process with pid 2163745 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2163745 00:10:29.554 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2163745 00:10:29.814 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.814 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:29.814 00:10:29.814 real 0m7.862s 00:10:29.814 user 0m2.886s 00:10:29.814 sys 0m5.607s 00:10:29.814 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.814 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.814 ************************************ 00:10:29.814 END TEST nvmf_zcopy 00:10:29.814 ************************************ 00:10:29.814 15:29:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:29.814 15:29:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:29.814 15:29:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.814 15:29:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.814 ************************************ 00:10:29.814 START TEST nvmf_nmic 00:10:29.814 ************************************ 00:10:29.814 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:30.074 * Looking for test storage... 
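Before nvmf_nmic starts, killprocess tears down the target with the guards traced above: verify the pid is alive, refuse to kill anything whose comm is sudo, then kill and wait. Folded into one function (the real helper lives in autotest_common.sh; this is a condensed sketch of the @952-976 trace):

killprocess() {   # condensed from the autotest_common.sh@952-976 trace
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # wait only succeeds for children
}
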
00:10:30.074 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.074 --rc genhtml_branch_coverage=1 00:10:30.074 --rc genhtml_function_coverage=1 00:10:30.074 --rc genhtml_legend=1 00:10:30.074 --rc geninfo_all_blocks=1 00:10:30.074 --rc geninfo_unexecuted_blocks=1 00:10:30.074 00:10:30.074 ' 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.074 --rc genhtml_branch_coverage=1 00:10:30.074 --rc genhtml_function_coverage=1 00:10:30.074 --rc genhtml_legend=1 00:10:30.074 --rc geninfo_all_blocks=1 00:10:30.074 --rc geninfo_unexecuted_blocks=1 00:10:30.074 00:10:30.074 ' 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.074 --rc genhtml_branch_coverage=1 00:10:30.074 --rc genhtml_function_coverage=1 00:10:30.074 --rc genhtml_legend=1 00:10:30.074 --rc geninfo_all_blocks=1 00:10:30.074 --rc geninfo_unexecuted_blocks=1 00:10:30.074 00:10:30.074 ' 00:10:30.074 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.075 --rc genhtml_branch_coverage=1 00:10:30.075 --rc genhtml_function_coverage=1 00:10:30.075 --rc genhtml_legend=1 00:10:30.075 --rc geninfo_all_blocks=1 00:10:30.075 --rc geninfo_unexecuted_blocks=1 00:10:30.075 00:10:30.075 ' 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.075 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
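Both suites log the same non-fatal complaint from nvmf/common.sh line 33: '[' '' -eq 1 ']' hands test an empty string where -eq needs an integer, so bash prints "integer expression expected" and the branch is simply skipped. Guarding the expansion with a numeric default would silence it; SOME_FLAG below is a hypothetical stand-in, since the trace does not show which variable expanded empty:

# As traced at common.sh@33: [ '' -eq 1 ]  ->  "[: : integer expression expected"
# Hardened: default the (possibly unset) flag to 0 before the numeric test.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi
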
00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.075 15:29:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.647 15:29:14 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.647 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:36.648 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:36.648 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:36.648 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:36.648 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
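rdma_device_init repeats the module loading for the nmic run: after the Linux uname check, load_ib_rdma_modules pulls in the IB core stack one module at a time (nvmf/common.sh@62-72). The same list as a loop that fails loudly instead of silently, assuming root on a Linux host:

# Sketch of load_ib_rdma_modules (nvmf/common.sh@62-72) as a fail-fast loop.
if [ "$(uname)" = Linux ]; then
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
    done
fi
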
00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.648 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:36.908 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:36.908 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:36.908 altname enp217s0f0np0 00:10:36.908 altname 
ens818f0np0 00:10:36.908 inet 192.168.100.8/24 scope global mlx_0_0 00:10:36.908 valid_lft forever preferred_lft forever 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:36.908 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:36.908 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:36.908 altname enp217s0f1np1 00:10:36.908 altname ens818f1np1 00:10:36.908 inet 192.168.100.9/24 scope global mlx_0_1 00:10:36.908 valid_lft forever preferred_lft forever 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:36.908 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
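The per-interface address lookup traced above (nvmf/common.sh@116-117) recovers each NIC's IPv4 address with an ip/awk/cut pipeline. A self-contained sketch of that helper, matching the commands in the trace:

  # Print the first IPv4 address assigned to an interface (empty if none).
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  # e.g. ip=$(get_ip_address mlx_0_0)   # yields 192.168.100.8 in this run

The -o flag keeps each address record on one line, so $4 is the CIDR form (192.168.100.8/24) and cut strips the prefix length.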
00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:36.909 192.168.100.9' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:36.909 192.168.100.9' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:36.909 192.168.100.9' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2167190 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2167190 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2167190 ']' 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:36.909 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 [2024-11-03 15:29:14.641158] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:10:36.909 [2024-11-03 15:29:14.641214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.169 [2024-11-03 15:29:14.720227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.169 [2024-11-03 15:29:14.744222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.169 [2024-11-03 15:29:14.744266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.169 [2024-11-03 15:29:14.744275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.169 [2024-11-03 15:29:14.744283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.169 [2024-11-03 15:29:14.744289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
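nvmfappstart above launches the SPDK target with the flags shown (-i 0 -e 0xFFFF -m 0xF) and then blocks in waitforlisten until the JSON-RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the workspace layout used in this job; the polling loop is illustrative, not the exact waitforlisten implementation:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app is up (bounded retries).
  for i in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done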
00:10:37.169 [2024-11-03 15:29:14.745905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.169 [2024-11-03 15:29:14.746029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.169 [2024-11-03 15:29:14.746061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.169 [2024-11-03 15:29:14.746059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.169 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:37.169 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:10:37.169 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.169 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:37.169 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.169 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.169 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:37.169 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.169 15:29:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.169 [2024-11-03 15:29:14.914625] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x217fc50/0x2184100) succeed. 00:10:37.169 [2024-11-03 15:29:14.923857] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2181290/0x21c57a0) succeed. 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.429 Malloc0 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:37.429 15:29:15 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.429 [2024-11-03 15:29:15.103052] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:37.429 test case1: single bdev can't be used in multiple subsystems 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.429 [2024-11-03 15:29:15.130824] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:37.429 [2024-11-03 15:29:15.130845] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:37.429 [2024-11-03 15:29:15.130854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.429 request: 00:10:37.429 { 00:10:37.429 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:37.429 "namespace": { 00:10:37.429 "bdev_name": "Malloc0", 00:10:37.429 "no_auto_visible": false 00:10:37.429 }, 00:10:37.429 "method": "nvmf_subsystem_add_ns", 00:10:37.429 "req_id": 1 00:10:37.429 } 00:10:37.429 Got JSON-RPC error response 00:10:37.429 response: 00:10:37.429 { 00:10:37.429 "code": -32602, 00:10:37.429 "message": "Invalid parameters" 00:10:37.429 } 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:37.429 Adding namespace failed - expected result. 
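Test case1 above deliberately re-adds Malloc0 to a second subsystem and passes only when the RPC is rejected (the bdev is already claimed exclusive_write by cnode1, hence the -32602 response). A sketch of that inverted-status check, using the rpc.py equivalent of the rpc_cmd calls in the trace and the same NQN and bdev name:

  # Adding the same bdev to a second subsystem must fail (exclusive_write claim).
  nmic_status=0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=$?
  if [ "$nmic_status" -eq 0 ]; then
      echo "namespace was added unexpectedly" >&2
      exit 1
  fi
  echo ' Adding namespace failed - expected result.'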
00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:37.429 test case2: host connect to nvmf target in multiple paths 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.429 [2024-11-03 15:29:15.146923] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.429 15:29:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:38.807 15:29:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:39.375 15:29:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.375 15:29:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:10:39.375 15:29:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.375 15:29:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:39.375 15:29:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:41.937 15:29:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:41.937 15:29:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:41.937 15:29:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.937 15:29:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:41.937 15:29:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.937 15:29:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:41.937 15:29:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:41.937 [global] 00:10:41.937 thread=1 00:10:41.937 invalidate=1 00:10:41.937 rw=write 00:10:41.937 time_based=1 00:10:41.937 runtime=1 00:10:41.937 ioengine=libaio 00:10:41.937 direct=1 00:10:41.937 bs=4096 00:10:41.937 iodepth=1 00:10:41.937 norandommap=0 00:10:41.937 numjobs=1 00:10:41.937 00:10:41.937 verify_dump=1 00:10:41.937 verify_backlog=512 00:10:41.937 verify_state_save=0 00:10:41.937 do_verify=1 00:10:41.937 verify=crc32c-intel 00:10:41.937 [job0] 00:10:41.937 filename=/dev/nvme0n1 00:10:41.937 Could not set queue depth (nvme0n1) 00:10:41.937 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.937 fio-3.35 00:10:41.937 Starting 1 thread 00:10:42.874 00:10:42.874 job0: (groupid=0, jobs=1): err= 0: pid=2168172: Sun Nov 3 15:29:20 2024 00:10:42.874 read: IOPS=6969, BW=27.2MiB/s (28.5MB/s)(27.2MiB/1001msec) 00:10:42.874 slat (nsec): min=8359, max=25878, avg=8989.59, stdev=810.54 00:10:42.874 clat (usec): min=41, max=151, avg=58.83, stdev= 3.85 00:10:42.874 lat (usec): min=57, max=160, avg=67.82, stdev= 3.89 00:10:42.874 clat percentiles (usec): 00:10:42.874 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:10:42.874 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:10:42.874 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 64], 95.00th=[ 65], 00:10:42.874 | 99.00th=[ 69], 99.50th=[ 71], 99.90th=[ 83], 99.95th=[ 85], 00:10:42.874 | 99.99th=[ 153] 00:10:42.874 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:10:42.874 slat (nsec): min=10451, max=49993, avg=11436.32, stdev=1213.40 00:10:42.874 clat (usec): min=38, max=259, avg=56.95, stdev= 5.93 00:10:42.874 lat (usec): min=57, max=271, avg=68.38, stdev= 6.05 00:10:42.874 clat percentiles (usec): 00:10:42.874 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:10:42.874 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:10:42.874 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 64], 00:10:42.874 | 99.00th=[ 68], 99.50th=[ 70], 99.90th=[ 131], 99.95th=[ 157], 00:10:42.874 | 99.99th=[ 260] 00:10:42.874 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:10:42.874 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:10:42.874 lat (usec) : 50=0.83%, 100=99.07%, 250=0.09%, 500=0.01% 00:10:42.874 cpu : usr=12.00%, sys=18.00%, ctx=14144, majf=0, minf=1 00:10:42.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.874 issued rwts: total=6976,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.874 00:10:42.874 Run status group 0 (all jobs): 00:10:42.874 READ: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=27.2MiB (28.6MB), run=1001-1001msec 00:10:42.874 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:10:42.874 00:10:42.874 Disk stats (read/write): 00:10:42.874 nvme0n1: ios=6193/6563, merge=0/0, ticks=313/297, in_queue=610, util=90.58% 00:10:42.874 15:29:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:44.780 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:44.780 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:44.780 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:44.780 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:45.039 15:29:22 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:45.039 rmmod nvme_rdma 00:10:45.039 rmmod nvme_fabrics 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2167190 ']' 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2167190 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2167190 ']' 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2167190 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2167190 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2167190' 00:10:45.039 killing process with pid 2167190 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2167190 00:10:45.039 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2167190 00:10:45.299 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.299 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:45.299 00:10:45.299 real 0m15.434s 00:10:45.299 user 0m42.889s 00:10:45.299 sys 0m6.094s 00:10:45.299 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.299 15:29:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.299 ************************************ 00:10:45.299 END TEST nvmf_nmic 00:10:45.299 
************************************ 00:10:45.299 15:29:23 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:45.299 15:29:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:45.299 15:29:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:45.299 15:29:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.299 ************************************ 00:10:45.299 START TEST nvmf_fio_target 00:10:45.299 ************************************ 00:10:45.299 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:45.559 * Looking for test storage... 00:10:45.559 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:45.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.559 --rc genhtml_branch_coverage=1 00:10:45.559 --rc genhtml_function_coverage=1 00:10:45.559 --rc genhtml_legend=1 00:10:45.559 --rc geninfo_all_blocks=1 00:10:45.559 --rc geninfo_unexecuted_blocks=1 00:10:45.559 00:10:45.559 ' 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:45.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.559 --rc genhtml_branch_coverage=1 00:10:45.559 --rc genhtml_function_coverage=1 00:10:45.559 --rc genhtml_legend=1 00:10:45.559 --rc geninfo_all_blocks=1 00:10:45.559 --rc geninfo_unexecuted_blocks=1 00:10:45.559 00:10:45.559 ' 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:45.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.559 --rc genhtml_branch_coverage=1 00:10:45.559 --rc genhtml_function_coverage=1 00:10:45.559 --rc genhtml_legend=1 00:10:45.559 --rc geninfo_all_blocks=1 00:10:45.559 --rc geninfo_unexecuted_blocks=1 00:10:45.559 00:10:45.559 ' 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:45.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.559 --rc genhtml_branch_coverage=1 00:10:45.559 --rc genhtml_function_coverage=1 00:10:45.559 --rc genhtml_legend=1 00:10:45.559 --rc geninfo_all_blocks=1 00:10:45.559 --rc geninfo_unexecuted_blocks=1 00:10:45.559 00:10:45.559 ' 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.559 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.560 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.560 
15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:45.560 15:29:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.133 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
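gather_supported_nvmf_pci_devs above fills the e810/x722/mlx arrays from a vendor:device cache before the "Found 0000:d9:00.0 (0x15b3 - 0x1015)" lines that follow. A sketch of the underlying sysfs scan for the Mellanox IDs this run matches against (ID list abbreviated from the trace; the real script caches the whole PCI bus first):

  mellanox=0x15b3
  # ConnectX device IDs accepted by the script; 0x1015 is what this testbed reports.
  mlx_ids="0x1013 0x1015 0x1017 0x1019 0x101b 0x101d 0x1021"
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor"); device=$(cat "$dev/device")
      [ "$vendor" = "$mellanox" ] || continue
      case " $mlx_ids " in
          *" $device "*) echo "Found ${dev##*/} ($vendor - $device)" ;;
      esac
  done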
00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:52.134 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:52.134 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:52.134 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:52.134 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:52.134 15:29:29 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:52.134 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.134 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:52.134 altname enp217s0f0np0 00:10:52.134 altname ens818f0np0 00:10:52.134 inet 192.168.100.8/24 scope global mlx_0_0 00:10:52.134 valid_lft forever preferred_lft forever 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:52.134 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.134 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:52.134 altname enp217s0f1np1 00:10:52.134 altname ens818f1np1 00:10:52.134 inet 192.168.100.9/24 scope global mlx_0_1 00:10:52.134 valid_lft forever preferred_lft forever 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.134 15:29:29 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.134 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:52.134 192.168.100.9' 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:52.135 192.168.100.9' 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:52.135 192.168.100.9' 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2172135 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2172135 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2172135 ']' 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.135 [2024-11-03 15:29:29.508757] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:10:52.135 [2024-11-03 15:29:29.508807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.135 [2024-11-03 15:29:29.588078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.135 [2024-11-03 15:29:29.609961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:52.135 [2024-11-03 15:29:29.610007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.135 [2024-11-03 15:29:29.610017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.135 [2024-11-03 15:29:29.610025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.135 [2024-11-03 15:29:29.610033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.135 [2024-11-03 15:29:29.611758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.135 [2024-11-03 15:29:29.611856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.135 [2024-11-03 15:29:29.611940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.135 [2024-11-03 15:29:29.611942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.135 15:29:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:52.393 [2024-11-03 15:29:29.946351] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11d7c50/0x11dc100) succeed. 00:10:52.393 [2024-11-03 15:29:29.955358] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11d9290/0x121d7a0) succeed. 
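The trace above is the standard bring-up for an RDMA NVMe-oF target: start nvmf_tgt, wait for its RPC socket, then create the rdma transport, at which point the two mlx5 IB devices are registered. A minimal sketch of the same sequence, assuming $SPDK stands in for the workspace checkout path (a shorthand, not from the log; every flag below is taken verbatim from the trace):

    # Launch the NVMe-oF target on 4 cores (-m 0xF) with all tracepoint
    # groups enabled (-e 0xFFFF); -i 0 sets the shared-memory instance id.
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Create the RDMA transport: 1024 shared data buffers and an
    # 8192-byte in-capsule data size, matching the call logged above.
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192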
00:10:52.393 15:29:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.652 15:29:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:52.652 15:29:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.911 15:29:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:52.911 15:29:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.171 15:29:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:53.171 15:29:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.430 15:29:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:53.430 15:29:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:53.430 15:29:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.689 15:29:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:53.689 15:29:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.948 15:29:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:53.948 15:29:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.207 15:29:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:54.207 15:29:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:54.207 15:29:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.466 15:29:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:54.466 15:29:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.725 15:29:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:54.725 15:29:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:54.984 15:29:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.984 [2024-11-03 15:29:32.743611] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:55.243 15:29:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:55.243 15:29:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:55.502 15:29:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:56.439 15:29:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:56.439 15:29:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:56.439 15:29:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.439 15:29:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:56.439 15:29:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:56.439 15:29:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:58.975 15:29:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:58.975 15:29:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:58.975 15:29:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.975 15:29:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:58.975 15:29:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.975 15:29:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:58.975 15:29:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:58.975 [global] 00:10:58.975 thread=1 00:10:58.975 invalidate=1 00:10:58.975 rw=write 00:10:58.975 time_based=1 00:10:58.975 runtime=1 00:10:58.975 ioengine=libaio 00:10:58.975 direct=1 00:10:58.975 bs=4096 00:10:58.975 iodepth=1 00:10:58.975 norandommap=0 00:10:58.975 numjobs=1 00:10:58.975 00:10:58.975 verify_dump=1 00:10:58.975 verify_backlog=512 00:10:58.975 verify_state_save=0 00:10:58.975 do_verify=1 00:10:58.975 verify=crc32c-intel 00:10:58.975 [job0] 00:10:58.975 filename=/dev/nvme0n1 00:10:58.975 [job1] 00:10:58.975 filename=/dev/nvme0n2 00:10:58.975 [job2] 00:10:58.975 filename=/dev/nvme0n3 00:10:58.975 [job3] 00:10:58.975 filename=/dev/nvme0n4 00:10:58.975 Could not set queue depth (nvme0n1) 00:10:58.975 Could not set queue depth (nvme0n2) 00:10:58.975 Could not set queue depth (nvme0n3) 00:10:58.975 Could not set queue depth (nvme0n4) 00:10:58.975 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.975 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.975 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.975 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.975 fio-3.35 00:10:58.975 Starting 4 threads 00:11:00.422 00:11:00.422 job0: (groupid=0, jobs=1): err= 0: pid=2173434: Sun Nov 3 15:29:37 2024 00:11:00.422 read: IOPS=4017, BW=15.7MiB/s (16.5MB/s)(15.7MiB/1001msec) 00:11:00.422 slat (usec): min=8, max=116, avg= 9.23, stdev= 1.96 00:11:00.422 clat (usec): min=3, max=190, avg=113.67, stdev=21.03 00:11:00.422 lat (usec): min=77, max=200, avg=122.90, stdev=21.00 00:11:00.422 clat percentiles (usec): 00:11:00.422 | 1.00th=[ 74], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 86], 00:11:00.422 | 30.00th=[ 109], 40.00th=[ 117], 50.00th=[ 121], 60.00th=[ 125], 00:11:00.422 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 139], 00:11:00.422 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 186], 00:11:00.422 | 99.99th=[ 192] 00:11:00.422 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:11:00.422 slat (nsec): min=10682, max=46633, avg=11593.85, stdev=1236.92 00:11:00.422 clat (usec): min=65, max=210, avg=106.89, stdev=21.32 00:11:00.422 lat (usec): min=76, max=221, avg=118.48, stdev=21.17 00:11:00.422 clat percentiles (usec): 00:11:00.422 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 81], 00:11:00.422 | 30.00th=[ 94], 40.00th=[ 109], 50.00th=[ 115], 60.00th=[ 118], 00:11:00.422 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 129], 95.00th=[ 135], 00:11:00.422 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 178], 00:11:00.422 | 99.99th=[ 210] 00:11:00.422 bw ( KiB/s): min=18872, max=18872, per=25.19%, avg=18872.00, stdev= 0.00, samples=1 00:11:00.422 iops : min= 4718, max= 4718, avg=4718.00, stdev= 0.00, samples=1 00:11:00.422 lat (usec) : 4=0.01%, 100=29.42%, 250=70.57% 00:11:00.422 cpu : usr=6.00%, sys=10.60%, ctx=8118, majf=0, minf=1 00:11:00.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.422 issued rwts: total=4022,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.422 job1: (groupid=0, jobs=1): err= 0: pid=2173437: Sun Nov 3 15:29:37 2024 00:11:00.422 read: IOPS=5307, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1001msec) 00:11:00.422 slat (nsec): min=8336, max=30258, avg=8923.11, stdev=810.30 00:11:00.422 clat (usec): min=65, max=331, avg=80.34, stdev= 6.61 00:11:00.422 lat (usec): min=73, max=340, avg=89.26, stdev= 6.69 00:11:00.422 clat percentiles (usec): 00:11:00.422 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 76], 00:11:00.422 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 82], 00:11:00.422 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 91], 00:11:00.422 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 109], 99.95th=[ 115], 00:11:00.422 | 99.99th=[ 330] 00:11:00.422 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:11:00.422 slat (nsec): min=10734, max=63707, avg=11449.30, stdev=1243.38 00:11:00.422 clat (usec): min=63, max=109, avg=76.81, 
stdev= 5.47 00:11:00.422 lat (usec): min=74, max=159, avg=88.26, stdev= 5.67 00:11:00.422 clat percentiles (usec): 00:11:00.422 | 1.00th=[ 68], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 73], 00:11:00.422 | 30.00th=[ 74], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 78], 00:11:00.422 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 87], 00:11:00.422 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 102], 00:11:00.422 | 99.99th=[ 110] 00:11:00.422 bw ( KiB/s): min=23272, max=23272, per=31.06%, avg=23272.00, stdev= 0.00, samples=1 00:11:00.422 iops : min= 5818, max= 5818, avg=5818.00, stdev= 0.00, samples=1 00:11:00.422 lat (usec) : 100=99.70%, 250=0.29%, 500=0.01% 00:11:00.422 cpu : usr=10.30%, sys=13.00%, ctx=10946, majf=0, minf=1 00:11:00.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.422 issued rwts: total=5313,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.422 job2: (groupid=0, jobs=1): err= 0: pid=2173438: Sun Nov 3 15:29:37 2024 00:11:00.422 read: IOPS=3827, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1001msec) 00:11:00.422 slat (nsec): min=8604, max=28622, avg=9801.10, stdev=2287.74 00:11:00.422 clat (usec): min=72, max=177, avg=114.93, stdev=17.56 00:11:00.422 lat (usec): min=81, max=186, avg=124.73, stdev=17.70 00:11:00.422 clat percentiles (usec): 00:11:00.422 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 94], 00:11:00.422 | 30.00th=[ 109], 40.00th=[ 116], 50.00th=[ 120], 60.00th=[ 123], 00:11:00.422 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 135], 95.00th=[ 139], 00:11:00.422 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 172], 99.95th=[ 178], 00:11:00.422 | 99.99th=[ 178] 00:11:00.422 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:11:00.422 slat (nsec): min=10514, max=48722, avg=12766.82, stdev=3029.17 00:11:00.422 clat (usec): min=69, max=180, avg=109.62, stdev=16.20 00:11:00.422 lat (usec): min=81, max=192, avg=122.39, stdev=16.47 00:11:00.422 clat percentiles (usec): 00:11:00.422 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 91], 00:11:00.422 | 30.00th=[ 103], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 117], 00:11:00.422 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 128], 95.00th=[ 131], 00:11:00.422 | 99.00th=[ 143], 99.50th=[ 149], 99.90th=[ 163], 99.95th=[ 169], 00:11:00.422 | 99.99th=[ 180] 00:11:00.422 bw ( KiB/s): min=17344, max=17344, per=23.15%, avg=17344.00, stdev= 0.00, samples=1 00:11:00.422 iops : min= 4336, max= 4336, avg=4336.00, stdev= 0.00, samples=1 00:11:00.422 lat (usec) : 100=26.18%, 250=73.82% 00:11:00.422 cpu : usr=7.30%, sys=9.80%, ctx=7927, majf=0, minf=1 00:11:00.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.422 issued rwts: total=3831,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.422 job3: (groupid=0, jobs=1): err= 0: pid=2173441: Sun Nov 3 15:29:37 2024 00:11:00.422 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:11:00.422 slat (nsec): min=8549, max=33297, avg=9080.80, stdev=1045.30 00:11:00.422 clat (usec): min=72, max=136, avg=94.08, 
stdev= 7.03 00:11:00.422 lat (usec): min=84, max=145, avg=103.16, stdev= 7.09 00:11:00.422 clat percentiles (usec): 00:11:00.422 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:11:00.422 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 95], 00:11:00.422 | 70.00th=[ 97], 80.00th=[ 100], 90.00th=[ 103], 95.00th=[ 108], 00:11:00.422 | 99.00th=[ 115], 99.50th=[ 119], 99.90th=[ 127], 99.95th=[ 131], 00:11:00.422 | 99.99th=[ 137] 00:11:00.422 write: IOPS=4921, BW=19.2MiB/s (20.2MB/s)(19.2MiB/1001msec); 0 zone resets 00:11:00.422 slat (nsec): min=10574, max=43454, avg=11599.92, stdev=1000.84 00:11:00.422 clat (usec): min=72, max=252, avg=89.92, stdev= 7.21 00:11:00.422 lat (usec): min=83, max=264, avg=101.52, stdev= 7.32 00:11:00.422 clat percentiles (usec): 00:11:00.422 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:11:00.422 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:11:00.422 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 102], 00:11:00.422 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 119], 99.95th=[ 120], 00:11:00.422 | 99.99th=[ 253] 00:11:00.423 bw ( KiB/s): min=20480, max=20480, per=27.33%, avg=20480.00, stdev= 0.00, samples=1 00:11:00.423 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:00.423 lat (usec) : 100=87.21%, 250=12.78%, 500=0.01% 00:11:00.423 cpu : usr=7.70%, sys=12.70%, ctx=9534, majf=0, minf=1 00:11:00.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.423 issued rwts: total=4608,4926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.423 00:11:00.423 Run status group 0 (all jobs): 00:11:00.423 READ: bw=69.4MiB/s (72.7MB/s), 14.9MiB/s-20.7MiB/s (15.7MB/s-21.7MB/s), io=69.4MiB (72.8MB), run=1001-1001msec 00:11:00.423 WRITE: bw=73.2MiB/s (76.7MB/s), 16.0MiB/s-22.0MiB/s (16.8MB/s-23.0MB/s), io=73.2MiB (76.8MB), run=1001-1001msec 00:11:00.423 00:11:00.423 Disk stats (read/write): 00:11:00.423 nvme0n1: ios=3340/3584, merge=0/0, ticks=345/351, in_queue=696, util=84.27% 00:11:00.423 nvme0n2: ios=4508/4608, merge=0/0, ticks=335/308, in_queue=643, util=85.19% 00:11:00.423 nvme0n3: ios=3100/3584, merge=0/0, ticks=324/357, in_queue=681, util=88.35% 00:11:00.423 nvme0n4: ios=3840/4096, merge=0/0, ticks=319/333, in_queue=652, util=89.48% 00:11:00.423 15:29:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:00.423 [global] 00:11:00.423 thread=1 00:11:00.423 invalidate=1 00:11:00.423 rw=randwrite 00:11:00.423 time_based=1 00:11:00.423 runtime=1 00:11:00.423 ioengine=libaio 00:11:00.423 direct=1 00:11:00.423 bs=4096 00:11:00.423 iodepth=1 00:11:00.423 norandommap=0 00:11:00.423 numjobs=1 00:11:00.423 00:11:00.423 verify_dump=1 00:11:00.423 verify_backlog=512 00:11:00.423 verify_state_save=0 00:11:00.423 do_verify=1 00:11:00.423 verify=crc32c-intel 00:11:00.423 [job0] 00:11:00.423 filename=/dev/nvme0n1 00:11:00.423 [job1] 00:11:00.423 filename=/dev/nvme0n2 00:11:00.423 [job2] 00:11:00.423 filename=/dev/nvme0n3 00:11:00.423 [job3] 00:11:00.423 filename=/dev/nvme0n4 00:11:00.423 Could not set queue depth (nvme0n1) 00:11:00.423 Could not set queue depth (nvme0n2) 00:11:00.423 Could not set queue depth 
(nvme0n3) 00:11:00.423 Could not set queue depth (nvme0n4) 00:11:00.423 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.423 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.423 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.423 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.423 fio-3.35 00:11:00.423 Starting 4 threads 00:11:01.801 00:11:01.801 job0: (groupid=0, jobs=1): err= 0: pid=2173862: Sun Nov 3 15:29:39 2024 00:11:01.801 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:11:01.801 slat (nsec): min=8300, max=32006, avg=8935.40, stdev=922.74 00:11:01.801 clat (usec): min=79, max=161, avg=109.88, stdev= 7.38 00:11:01.801 lat (usec): min=87, max=170, avg=118.81, stdev= 7.36 00:11:01.801 clat percentiles (usec): 00:11:01.801 | 1.00th=[ 92], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 104], 00:11:01.801 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:11:01.801 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 119], 95.00th=[ 122], 00:11:01.801 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 151], 99.95th=[ 157], 00:11:01.801 | 99.99th=[ 161] 00:11:01.801 write: IOPS=4310, BW=16.8MiB/s (17.7MB/s)(16.9MiB/1001msec); 0 zone resets 00:11:01.801 slat (nsec): min=10209, max=81350, avg=11182.24, stdev=1514.08 00:11:01.801 clat (usec): min=70, max=144, avg=102.95, stdev= 6.95 00:11:01.801 lat (usec): min=82, max=193, avg=114.13, stdev= 7.04 00:11:01.801 clat percentiles (usec): 00:11:01.801 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 98], 00:11:01.801 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 105], 00:11:01.801 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 114], 00:11:01.801 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 139], 99.95th=[ 145], 00:11:01.801 | 99.99th=[ 145] 00:11:01.801 bw ( KiB/s): min=17216, max=17216, per=26.15%, avg=17216.00, stdev= 0.00, samples=1 00:11:01.801 iops : min= 4304, max= 4304, avg=4304.00, stdev= 0.00, samples=1 00:11:01.801 lat (usec) : 100=20.27%, 250=79.73% 00:11:01.801 cpu : usr=7.70%, sys=10.00%, ctx=8412, majf=0, minf=1 00:11:01.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.802 issued rwts: total=4096,4315,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.802 job1: (groupid=0, jobs=1): err= 0: pid=2173868: Sun Nov 3 15:29:39 2024 00:11:01.802 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:11:01.802 slat (nsec): min=8449, max=21869, avg=9146.24, stdev=791.97 00:11:01.802 clat (usec): min=78, max=160, avg=109.63, stdev= 7.16 00:11:01.802 lat (usec): min=87, max=169, avg=118.78, stdev= 7.14 00:11:01.802 clat percentiles (usec): 00:11:01.802 | 1.00th=[ 93], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 104], 00:11:01.802 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 112], 00:11:01.802 | 70.00th=[ 114], 80.00th=[ 116], 90.00th=[ 119], 95.00th=[ 121], 00:11:01.802 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 143], 99.95th=[ 149], 00:11:01.802 | 99.99th=[ 161] 00:11:01.802 write: IOPS=4309, BW=16.8MiB/s (17.7MB/s)(16.9MiB/1001msec); 0 zone 
resets 00:11:01.802 slat (nsec): min=10454, max=43050, avg=11259.96, stdev=1102.04 00:11:01.802 clat (usec): min=68, max=141, avg=102.94, stdev= 6.77 00:11:01.802 lat (usec): min=79, max=153, avg=114.20, stdev= 6.82 00:11:01.802 clat percentiles (usec): 00:11:01.802 | 1.00th=[ 87], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 98], 00:11:01.802 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 105], 00:11:01.802 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 114], 00:11:01.802 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 139], 99.95th=[ 141], 00:11:01.802 | 99.99th=[ 143] 00:11:01.802 bw ( KiB/s): min=17200, max=17200, per=26.13%, avg=17200.00, stdev= 0.00, samples=1 00:11:01.802 iops : min= 4300, max= 4300, avg=4300.00, stdev= 0.00, samples=1 00:11:01.802 lat (usec) : 100=19.69%, 250=80.31% 00:11:01.802 cpu : usr=8.20%, sys=9.70%, ctx=8411, majf=0, minf=1 00:11:01.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.802 issued rwts: total=4096,4314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.802 job2: (groupid=0, jobs=1): err= 0: pid=2173880: Sun Nov 3 15:29:39 2024 00:11:01.802 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:01.802 slat (nsec): min=8589, max=32622, avg=9221.37, stdev=875.99 00:11:01.802 clat (usec): min=80, max=355, avg=124.56, stdev= 9.10 00:11:01.802 lat (usec): min=89, max=364, avg=133.78, stdev= 9.09 00:11:01.802 clat percentiles (usec): 00:11:01.802 | 1.00th=[ 100], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 120], 00:11:01.802 | 30.00th=[ 122], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 126], 00:11:01.802 | 70.00th=[ 128], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 137], 00:11:01.802 | 99.00th=[ 149], 99.50th=[ 163], 99.90th=[ 180], 99.95th=[ 190], 00:11:01.802 | 99.99th=[ 355] 00:11:01.802 write: IOPS=3918, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1001msec); 0 zone resets 00:11:01.802 slat (nsec): min=10320, max=38490, avg=11288.14, stdev=1069.11 00:11:01.802 clat (usec): min=76, max=168, avg=116.91, stdev= 8.27 00:11:01.802 lat (usec): min=87, max=179, avg=128.19, stdev= 8.30 00:11:01.802 clat percentiles (usec): 00:11:01.802 | 1.00th=[ 87], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 113], 00:11:01.802 | 30.00th=[ 115], 40.00th=[ 116], 50.00th=[ 117], 60.00th=[ 119], 00:11:01.802 | 70.00th=[ 120], 80.00th=[ 122], 90.00th=[ 125], 95.00th=[ 128], 00:11:01.802 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 165], 00:11:01.802 | 99.99th=[ 169] 00:11:01.802 bw ( KiB/s): min=16384, max=16384, per=24.89%, avg=16384.00, stdev= 0.00, samples=1 00:11:01.802 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:01.802 lat (usec) : 100=1.47%, 250=98.52%, 500=0.01% 00:11:01.802 cpu : usr=5.90%, sys=10.10%, ctx=7506, majf=0, minf=1 00:11:01.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.802 issued rwts: total=3584,3922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.802 job3: (groupid=0, jobs=1): err= 0: pid=2173885: Sun Nov 3 15:29:39 2024 00:11:01.802 read: IOPS=3580, BW=14.0MiB/s 
(14.7MB/s)(14.0MiB/1001msec) 00:11:01.802 slat (nsec): min=8625, max=22670, avg=9339.16, stdev=1072.06 00:11:01.802 clat (usec): min=80, max=364, avg=124.51, stdev= 9.04 00:11:01.802 lat (usec): min=89, max=387, avg=133.85, stdev= 9.12 00:11:01.802 clat percentiles (usec): 00:11:01.802 | 1.00th=[ 105], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 120], 00:11:01.802 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 126], 00:11:01.802 | 70.00th=[ 128], 80.00th=[ 130], 90.00th=[ 133], 95.00th=[ 137], 00:11:01.802 | 99.00th=[ 149], 99.50th=[ 163], 99.90th=[ 178], 99.95th=[ 180], 00:11:01.802 | 99.99th=[ 367] 00:11:01.802 write: IOPS=3918, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1001msec); 0 zone resets 00:11:01.802 slat (nsec): min=10376, max=42010, avg=11385.61, stdev=1057.84 00:11:01.802 clat (usec): min=75, max=172, avg=116.77, stdev= 8.33 00:11:01.802 lat (usec): min=86, max=186, avg=128.15, stdev= 8.40 00:11:01.802 clat percentiles (usec): 00:11:01.802 | 1.00th=[ 87], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 113], 00:11:01.802 | 30.00th=[ 115], 40.00th=[ 116], 50.00th=[ 117], 60.00th=[ 119], 00:11:01.802 | 70.00th=[ 120], 80.00th=[ 122], 90.00th=[ 125], 95.00th=[ 128], 00:11:01.802 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 174], 00:11:01.802 | 99.99th=[ 174] 00:11:01.802 bw ( KiB/s): min=16384, max=16384, per=24.89%, avg=16384.00, stdev= 0.00, samples=1 00:11:01.802 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:01.802 lat (usec) : 100=1.37%, 250=98.61%, 500=0.01% 00:11:01.802 cpu : usr=6.00%, sys=10.10%, ctx=7506, majf=0, minf=1 00:11:01.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.802 issued rwts: total=3584,3922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.802 00:11:01.802 Run status group 0 (all jobs): 00:11:01.802 READ: bw=59.9MiB/s (62.9MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=60.0MiB (62.9MB), run=1001-1001msec 00:11:01.802 WRITE: bw=64.3MiB/s (67.4MB/s), 15.3MiB/s-16.8MiB/s (16.0MB/s-17.7MB/s), io=64.3MiB (67.5MB), run=1001-1001msec 00:11:01.802 00:11:01.802 Disk stats (read/write): 00:11:01.802 nvme0n1: ios=3491/3584, merge=0/0, ticks=372/334, in_queue=706, util=84.47% 00:11:01.802 nvme0n2: ios=3440/3584, merge=0/0, ticks=356/333, in_queue=689, util=85.31% 00:11:01.802 nvme0n3: ios=3072/3187, merge=0/0, ticks=345/342, in_queue=687, util=88.47% 00:11:01.802 nvme0n4: ios=3072/3188, merge=0/0, ticks=363/344, in_queue=707, util=89.60% 00:11:01.802 15:29:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:01.802 [global] 00:11:01.802 thread=1 00:11:01.802 invalidate=1 00:11:01.802 rw=write 00:11:01.802 time_based=1 00:11:01.802 runtime=1 00:11:01.802 ioengine=libaio 00:11:01.802 direct=1 00:11:01.802 bs=4096 00:11:01.802 iodepth=128 00:11:01.802 norandommap=0 00:11:01.802 numjobs=1 00:11:01.802 00:11:01.802 verify_dump=1 00:11:01.802 verify_backlog=512 00:11:01.802 verify_state_save=0 00:11:01.802 do_verify=1 00:11:01.802 verify=crc32c-intel 00:11:01.802 [job0] 00:11:01.802 filename=/dev/nvme0n1 00:11:01.802 [job1] 00:11:01.802 filename=/dev/nvme0n2 00:11:01.802 [job2] 00:11:01.802 filename=/dev/nvme0n3 00:11:01.802 [job3] 
00:11:01.802 filename=/dev/nvme0n4 00:11:01.802 Could not set queue depth (nvme0n1) 00:11:01.802 Could not set queue depth (nvme0n2) 00:11:01.802 Could not set queue depth (nvme0n3) 00:11:01.802 Could not set queue depth (nvme0n4) 00:11:02.061 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.061 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.061 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.061 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.061 fio-3.35 00:11:02.061 Starting 4 threads 00:11:03.439 00:11:03.439 job0: (groupid=0, jobs=1): err= 0: pid=2174295: Sun Nov 3 15:29:41 2024 00:11:03.439 read: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec) 00:11:03.439 slat (usec): min=2, max=1287, avg=59.64, stdev=215.40 00:11:03.439 clat (usec): min=6847, max=9466, avg=7780.07, stdev=356.49 00:11:03.439 lat (usec): min=7054, max=9469, avg=7839.71, stdev=399.39 00:11:03.439 clat percentiles (usec): 00:11:03.439 | 1.00th=[ 7177], 5.00th=[ 7308], 10.00th=[ 7439], 20.00th=[ 7504], 00:11:03.439 | 30.00th=[ 7635], 40.00th=[ 7635], 50.00th=[ 7701], 60.00th=[ 7767], 00:11:03.439 | 70.00th=[ 7832], 80.00th=[ 7963], 90.00th=[ 8356], 95.00th=[ 8586], 00:11:03.439 | 99.00th=[ 8848], 99.50th=[ 8848], 99.90th=[ 9372], 99.95th=[ 9372], 00:11:03.439 | 99.99th=[ 9503] 00:11:03.439 write: IOPS=8534, BW=33.3MiB/s (35.0MB/s)(33.4MiB/1001msec); 0 zone resets 00:11:03.439 slat (usec): min=2, max=1541, avg=56.59, stdev=201.69 00:11:03.439 clat (usec): min=947, max=9199, avg=7380.87, stdev=559.45 00:11:03.439 lat (usec): min=956, max=9203, avg=7437.46, stdev=585.73 00:11:03.439 clat percentiles (usec): 00:11:03.439 | 1.00th=[ 5538], 5.00th=[ 6980], 10.00th=[ 7046], 20.00th=[ 7111], 00:11:03.439 | 30.00th=[ 7242], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7373], 00:11:03.439 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 8029], 95.00th=[ 8225], 00:11:03.439 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 9110], 99.95th=[ 9110], 00:11:03.439 | 99.99th=[ 9241] 00:11:03.439 bw ( KiB/s): min=32928, max=32928, per=26.51%, avg=32928.00, stdev= 0.00, samples=1 00:11:03.439 iops : min= 8232, max= 8232, avg=8232.00, stdev= 0.00, samples=1 00:11:03.439 lat (usec) : 1000=0.03% 00:11:03.439 lat (msec) : 2=0.12%, 4=0.18%, 10=99.67% 00:11:03.439 cpu : usr=3.50%, sys=6.30%, ctx=1088, majf=0, minf=1 00:11:03.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:03.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.439 issued rwts: total=8192,8543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.439 job1: (groupid=0, jobs=1): err= 0: pid=2174304: Sun Nov 3 15:29:41 2024 00:11:03.439 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:11:03.439 slat (usec): min=2, max=1857, avg=60.33, stdev=221.76 00:11:03.439 clat (usec): min=6571, max=8911, avg=7890.39, stdev=291.92 00:11:03.439 lat (usec): min=6601, max=9483, avg=7950.71, stdev=255.84 00:11:03.439 clat percentiles (usec): 00:11:03.439 | 1.00th=[ 6849], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 7767], 00:11:03.439 | 30.00th=[ 7832], 40.00th=[ 7898], 50.00th=[ 7898], 60.00th=[ 7963], 
00:11:03.439 | 70.00th=[ 8029], 80.00th=[ 8094], 90.00th=[ 8160], 95.00th=[ 8225], 00:11:03.439 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8586], 99.95th=[ 8848], 00:11:03.439 | 99.99th=[ 8848] 00:11:03.439 write: IOPS=8303, BW=32.4MiB/s (34.0MB/s)(32.5MiB/1003msec); 0 zone resets 00:11:03.439 slat (usec): min=2, max=1737, avg=57.67, stdev=210.54 00:11:03.439 clat (usec): min=1860, max=9465, avg=7498.85, stdev=474.63 00:11:03.439 lat (usec): min=2694, max=9513, avg=7556.51, stdev=458.20 00:11:03.439 clat percentiles (usec): 00:11:03.439 | 1.00th=[ 5800], 5.00th=[ 6718], 10.00th=[ 7111], 20.00th=[ 7308], 00:11:03.439 | 30.00th=[ 7439], 40.00th=[ 7504], 50.00th=[ 7570], 60.00th=[ 7635], 00:11:03.439 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 7832], 95.00th=[ 8029], 00:11:03.439 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 9372], 99.95th=[ 9372], 00:11:03.439 | 99.99th=[ 9503] 00:11:03.439 bw ( KiB/s): min=32768, max=32840, per=26.41%, avg=32804.00, stdev=50.91, samples=2 00:11:03.439 iops : min= 8192, max= 8210, avg=8201.00, stdev=12.73, samples=2 00:11:03.439 lat (msec) : 2=0.01%, 4=0.19%, 10=99.80% 00:11:03.439 cpu : usr=3.69%, sys=5.49%, ctx=1031, majf=0, minf=1 00:11:03.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:03.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.440 issued rwts: total=8192,8328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.440 job2: (groupid=0, jobs=1): err= 0: pid=2174319: Sun Nov 3 15:29:41 2024 00:11:03.440 read: IOPS=6848, BW=26.8MiB/s (28.1MB/s)(26.9MiB/1005msec) 00:11:03.440 slat (usec): min=2, max=2336, avg=70.62, stdev=262.20 00:11:03.440 clat (usec): min=4353, max=14167, avg=9327.14, stdev=525.26 00:11:03.440 lat (usec): min=5304, max=14176, avg=9397.76, stdev=510.31 00:11:03.440 clat percentiles (usec): 00:11:03.440 | 1.00th=[ 7963], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9110], 00:11:03.440 | 30.00th=[ 9241], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9372], 00:11:03.440 | 70.00th=[ 9503], 80.00th=[ 9503], 90.00th=[ 9634], 95.00th=[ 9896], 00:11:03.440 | 99.00th=[10683], 99.50th=[11731], 99.90th=[13173], 99.95th=[13173], 00:11:03.440 | 99.99th=[14222] 00:11:03.440 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:11:03.440 slat (usec): min=2, max=2143, avg=67.95, stdev=251.63 00:11:03.440 clat (usec): min=1984, max=11127, avg=8834.67, stdev=635.91 00:11:03.440 lat (usec): min=2000, max=11139, avg=8902.62, stdev=630.66 00:11:03.440 clat percentiles (usec): 00:11:03.440 | 1.00th=[ 5735], 5.00th=[ 7832], 10.00th=[ 8356], 20.00th=[ 8717], 00:11:03.440 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 8979], 00:11:03.440 | 70.00th=[ 9110], 80.00th=[ 9110], 90.00th=[ 9241], 95.00th=[ 9372], 00:11:03.440 | 99.00th=[10159], 99.50th=[10159], 99.90th=[10552], 99.95th=[11076], 00:11:03.440 | 99.99th=[11076] 00:11:03.440 bw ( KiB/s): min=28672, max=28672, per=23.08%, avg=28672.00, stdev= 0.00, samples=2 00:11:03.440 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:11:03.440 lat (msec) : 2=0.05%, 4=0.24%, 10=96.86%, 20=2.85% 00:11:03.440 cpu : usr=3.29%, sys=6.08%, ctx=870, majf=0, minf=2 00:11:03.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:03.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.440 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.440 issued rwts: total=6883,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.440 job3: (groupid=0, jobs=1): err= 0: pid=2174326: Sun Nov 3 15:29:41 2024 00:11:03.440 read: IOPS=6655, BW=26.0MiB/s (27.3MB/s)(26.1MiB/1002msec) 00:11:03.440 slat (usec): min=2, max=2327, avg=73.05, stdev=269.39 00:11:03.440 clat (usec): min=996, max=12490, avg=9382.98, stdev=792.02 00:11:03.440 lat (usec): min=1004, max=12493, avg=9456.03, stdev=820.14 00:11:03.440 clat percentiles (usec): 00:11:03.440 | 1.00th=[ 8094], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 00:11:03.440 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9241], 60.00th=[ 9372], 00:11:03.440 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:11:03.440 | 99.00th=[11338], 99.50th=[11600], 99.90th=[12125], 99.95th=[12125], 00:11:03.440 | 99.99th=[12518] 00:11:03.440 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:11:03.440 slat (usec): min=2, max=1884, avg=68.13, stdev=246.31 00:11:03.440 clat (usec): min=2101, max=11924, avg=8944.76, stdev=800.86 00:11:03.440 lat (usec): min=2103, max=11928, avg=9012.89, stdev=828.61 00:11:03.440 clat percentiles (usec): 00:11:03.440 | 1.00th=[ 7504], 5.00th=[ 8029], 10.00th=[ 8160], 20.00th=[ 8455], 00:11:03.440 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:11:03.440 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10159], 00:11:03.440 | 99.00th=[10683], 99.50th=[11076], 99.90th=[11600], 99.95th=[11863], 00:11:03.440 | 99.99th=[11863] 00:11:03.440 bw ( KiB/s): min=28672, max=28672, per=23.08%, avg=28672.00, stdev= 0.00, samples=1 00:11:03.440 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:11:03.440 lat (usec) : 1000=0.01% 00:11:03.440 lat (msec) : 2=0.04%, 4=0.23%, 10=85.28%, 20=14.45% 00:11:03.440 cpu : usr=3.50%, sys=4.90%, ctx=984, majf=0, minf=1 00:11:03.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:03.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.440 issued rwts: total=6669,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.440 00:11:03.440 Run status group 0 (all jobs): 00:11:03.440 READ: bw=116MiB/s (122MB/s), 26.0MiB/s-32.0MiB/s (27.3MB/s-33.5MB/s), io=117MiB (123MB), run=1001-1005msec 00:11:03.440 WRITE: bw=121MiB/s (127MB/s), 27.9MiB/s-33.3MiB/s (29.2MB/s-35.0MB/s), io=122MiB (128MB), run=1001-1005msec 00:11:03.440 00:11:03.440 Disk stats (read/write): 00:11:03.440 nvme0n1: ios=6796/7168, merge=0/0, ticks=13054/13111, in_queue=26165, util=84.47% 00:11:03.440 nvme0n2: ios=6656/7047, merge=0/0, ticks=25878/26172, in_queue=52050, util=85.22% 00:11:03.440 nvme0n3: ios=5632/6016, merge=0/0, ticks=51402/52084, in_queue=103486, util=88.37% 00:11:03.440 nvme0n4: ios=5632/5905, merge=0/0, ticks=13060/13014, in_queue=26074, util=89.51% 00:11:03.440 15:29:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:03.440 [global] 00:11:03.440 thread=1 00:11:03.440 invalidate=1 00:11:03.440 rw=randwrite 00:11:03.440 time_based=1 00:11:03.440 runtime=1 00:11:03.440 ioengine=libaio 00:11:03.440 
direct=1 00:11:03.440 bs=4096 00:11:03.440 iodepth=128 00:11:03.440 norandommap=0 00:11:03.440 numjobs=1 00:11:03.440 00:11:03.440 verify_dump=1 00:11:03.440 verify_backlog=512 00:11:03.440 verify_state_save=0 00:11:03.440 do_verify=1 00:11:03.440 verify=crc32c-intel 00:11:03.440 [job0] 00:11:03.440 filename=/dev/nvme0n1 00:11:03.440 [job1] 00:11:03.440 filename=/dev/nvme0n2 00:11:03.440 [job2] 00:11:03.440 filename=/dev/nvme0n3 00:11:03.440 [job3] 00:11:03.440 filename=/dev/nvme0n4 00:11:03.440 Could not set queue depth (nvme0n1) 00:11:03.440 Could not set queue depth (nvme0n2) 00:11:03.440 Could not set queue depth (nvme0n3) 00:11:03.440 Could not set queue depth (nvme0n4) 00:11:03.699 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.699 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.699 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.699 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.699 fio-3.35 00:11:03.699 Starting 4 threads 00:11:05.078 00:11:05.078 job0: (groupid=0, jobs=1): err= 0: pid=2174749: Sun Nov 3 15:29:42 2024 00:11:05.078 read: IOPS=8188, BW=32.0MiB/s (33.5MB/s)(32.1MiB/1002msec) 00:11:05.078 slat (usec): min=2, max=1396, avg=59.07, stdev=223.84 00:11:05.078 clat (usec): min=674, max=8299, avg=7702.97, stdev=392.36 00:11:05.078 lat (usec): min=1853, max=8302, avg=7762.03, stdev=324.44 00:11:05.078 clat percentiles (usec): 00:11:05.078 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7504], 00:11:05.078 | 30.00th=[ 7635], 40.00th=[ 7701], 50.00th=[ 7767], 60.00th=[ 7832], 00:11:05.078 | 70.00th=[ 7898], 80.00th=[ 7963], 90.00th=[ 8029], 95.00th=[ 8094], 00:11:05.078 | 99.00th=[ 8160], 99.50th=[ 8225], 99.90th=[ 8291], 99.95th=[ 8291], 00:11:05.078 | 99.99th=[ 8291] 00:11:05.078 write: IOPS=8686, BW=33.9MiB/s (35.6MB/s)(34.0MiB/1002msec); 0 zone resets 00:11:05.078 slat (usec): min=2, max=2175, avg=55.76, stdev=211.35 00:11:05.078 clat (usec): min=1983, max=8949, avg=7316.77, stdev=431.56 00:11:05.078 lat (usec): min=2896, max=8961, avg=7372.53, stdev=379.82 00:11:05.078 clat percentiles (usec): 00:11:05.078 | 1.00th=[ 5997], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7177], 00:11:05.078 | 30.00th=[ 7242], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7439], 00:11:05.078 | 70.00th=[ 7504], 80.00th=[ 7570], 90.00th=[ 7635], 95.00th=[ 7701], 00:11:05.078 | 99.00th=[ 8094], 99.50th=[ 8160], 99.90th=[ 8717], 99.95th=[ 8717], 00:11:05.078 | 99.99th=[ 8979] 00:11:05.078 bw ( KiB/s): min=33896, max=34824, per=27.31%, avg=34360.00, stdev=656.20, samples=2 00:11:05.078 iops : min= 8474, max= 8706, avg=8590.00, stdev=164.05, samples=2 00:11:05.078 lat (usec) : 750=0.01% 00:11:05.078 lat (msec) : 2=0.08%, 4=0.17%, 10=99.75% 00:11:05.078 cpu : usr=3.30%, sys=7.29%, ctx=1066, majf=0, minf=1 00:11:05.078 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:05.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.078 issued rwts: total=8205,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.078 job1: (groupid=0, jobs=1): err= 0: pid=2174764: Sun Nov 3 15:29:42 2024 00:11:05.078 read: 
IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec) 00:11:05.078 slat (usec): min=2, max=1882, avg=59.72, stdev=216.53 00:11:05.078 clat (usec): min=6399, max=9407, avg=7765.07, stdev=508.89 00:11:05.078 lat (usec): min=6411, max=9410, avg=7824.79, stdev=534.74 00:11:05.078 clat percentiles (usec): 00:11:05.078 | 1.00th=[ 6718], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7439], 00:11:05.078 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7701], 60.00th=[ 7767], 00:11:05.079 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8586], 95.00th=[ 8848], 00:11:05.079 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[ 9372], 99.95th=[ 9372], 00:11:05.079 | 99.99th=[ 9372] 00:11:05.079 write: IOPS=8525, BW=33.3MiB/s (34.9MB/s)(33.4MiB/1002msec); 0 zone resets 00:11:05.079 slat (usec): min=2, max=1164, avg=56.60, stdev=200.21 00:11:05.079 clat (usec): min=507, max=9039, avg=7394.54, stdev=645.99 00:11:05.079 lat (usec): min=1247, max=9042, avg=7451.14, stdev=664.33 00:11:05.079 clat percentiles (usec): 00:11:05.079 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7177], 00:11:05.079 | 30.00th=[ 7242], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7439], 00:11:05.079 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 8225], 95.00th=[ 8356], 00:11:05.079 | 99.00th=[ 8717], 99.50th=[ 8717], 99.90th=[ 8979], 99.95th=[ 8979], 00:11:05.079 | 99.99th=[ 8979] 00:11:05.079 bw ( KiB/s): min=33144, max=33144, per=26.34%, avg=33144.00, stdev= 0.00, samples=1 00:11:05.079 iops : min= 8286, max= 8286, avg=8286.00, stdev= 0.00, samples=1 00:11:05.079 lat (usec) : 750=0.01% 00:11:05.079 lat (msec) : 2=0.09%, 4=0.19%, 10=99.71% 00:11:05.079 cpu : usr=3.20%, sys=6.59%, ctx=1141, majf=0, minf=1 00:11:05.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:05.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.079 issued rwts: total=8192,8543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.079 job2: (groupid=0, jobs=1): err= 0: pid=2174788: Sun Nov 3 15:29:42 2024 00:11:05.079 read: IOPS=6653, BW=26.0MiB/s (27.3MB/s)(26.0MiB/1002msec) 00:11:05.079 slat (usec): min=2, max=1609, avg=72.13, stdev=275.29 00:11:05.079 clat (usec): min=1751, max=10007, avg=9384.91, stdev=469.72 00:11:05.079 lat (usec): min=1759, max=10504, avg=9457.05, stdev=384.56 00:11:05.079 clat percentiles (usec): 00:11:05.079 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9241], 00:11:05.079 | 30.00th=[ 9372], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9503], 00:11:05.079 | 70.00th=[ 9634], 80.00th=[ 9634], 90.00th=[ 9765], 95.00th=[ 9896], 00:11:05.079 | 99.00th=[10028], 99.50th=[10028], 99.90th=[10028], 99.95th=[10028], 00:11:05.079 | 99.99th=[10028] 00:11:05.079 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:11:05.079 slat (usec): min=2, max=1512, avg=69.02, stdev=260.66 00:11:05.079 clat (usec): min=1890, max=10519, avg=8965.94, stdev=562.76 00:11:05.079 lat (usec): min=3157, max=10905, avg=9034.97, stdev=501.32 00:11:05.079 clat percentiles (usec): 00:11:05.079 | 1.00th=[ 6718], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 8848], 00:11:05.079 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9110], 00:11:05.079 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9503], 00:11:05.079 | 99.00th=[ 9634], 99.50th=[ 9634], 99.90th=[10552], 99.95th=[10552], 00:11:05.079 | 
99.99th=[10552] 00:11:05.079 bw ( KiB/s): min=27744, max=28672, per=22.42%, avg=28208.00, stdev=656.20, samples=2 00:11:05.079 iops : min= 6936, max= 7168, avg=7052.00, stdev=164.05, samples=2 00:11:05.079 lat (msec) : 2=0.09%, 4=0.12%, 10=99.57%, 20=0.22% 00:11:05.079 cpu : usr=3.30%, sys=5.19%, ctx=867, majf=0, minf=1 00:11:05.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:05.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.079 issued rwts: total=6667,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.079 job3: (groupid=0, jobs=1): err= 0: pid=2174796: Sun Nov 3 15:29:42 2024 00:11:05.079 read: IOPS=6756, BW=26.4MiB/s (27.7MB/s)(26.5MiB/1004msec) 00:11:05.079 slat (usec): min=2, max=1443, avg=71.71, stdev=267.35 00:11:05.079 clat (usec): min=2029, max=11074, avg=9333.63, stdev=603.42 00:11:05.079 lat (usec): min=3119, max=11112, avg=9405.34, stdev=586.46 00:11:05.079 clat percentiles (usec): 00:11:05.079 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9241], 00:11:05.079 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9503], 00:11:05.079 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[ 9765], 95.00th=[ 9765], 00:11:05.079 | 99.00th=[10028], 99.50th=[10159], 99.90th=[10683], 99.95th=[11076], 00:11:05.079 | 99.99th=[11076] 00:11:05.079 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:11:05.079 slat (usec): min=2, max=3161, avg=68.06, stdev=252.88 00:11:05.079 clat (usec): min=6602, max=10172, avg=8895.14, stdev=364.99 00:11:05.079 lat (usec): min=6609, max=10644, avg=8963.21, stdev=341.55 00:11:05.079 clat percentiles (usec): 00:11:05.079 | 1.00th=[ 7635], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 8717], 00:11:05.079 | 30.00th=[ 8848], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 8979], 00:11:05.079 | 70.00th=[ 9110], 80.00th=[ 9110], 90.00th=[ 9241], 95.00th=[ 9372], 00:11:05.079 | 99.00th=[ 9503], 99.50th=[ 9503], 99.90th=[ 9634], 99.95th=[ 9634], 00:11:05.079 | 99.99th=[10159] 00:11:05.079 bw ( KiB/s): min=28672, max=28672, per=22.79%, avg=28672.00, stdev= 0.00, samples=2 00:11:05.079 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:11:05.079 lat (msec) : 4=0.18%, 10=99.38%, 20=0.44% 00:11:05.079 cpu : usr=3.89%, sys=5.08%, ctx=876, majf=0, minf=1 00:11:05.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:05.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.079 issued rwts: total=6784,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.079 00:11:05.079 Run status group 0 (all jobs): 00:11:05.079 READ: bw=116MiB/s (122MB/s), 26.0MiB/s-32.0MiB/s (27.3MB/s-33.5MB/s), io=117MiB (122MB), run=1002-1004msec 00:11:05.079 WRITE: bw=123MiB/s (129MB/s), 27.9MiB/s-33.9MiB/s (29.2MB/s-35.6MB/s), io=123MiB (129MB), run=1002-1004msec 00:11:05.079 00:11:05.079 Disk stats (read/write): 00:11:05.079 nvme0n1: ios=6854/7168, merge=0/0, ticks=17146/16830, in_queue=33976, util=83.95% 00:11:05.079 nvme0n2: ios=6673/7168, merge=0/0, ticks=12995/13101, in_queue=26096, util=84.87% 00:11:05.079 nvme0n3: ios=5632/5794, merge=0/0, ticks=17160/16911, in_queue=34071, util=88.30% 00:11:05.079 nvme0n4: 
ios=5632/5887, merge=0/0, ticks=25886/25412, in_queue=51298, util=89.44% 00:11:05.079 15:29:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:05.079 15:29:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2174979 00:11:05.079 15:29:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:05.079 15:29:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:05.079 [global] 00:11:05.079 thread=1 00:11:05.079 invalidate=1 00:11:05.079 rw=read 00:11:05.079 time_based=1 00:11:05.079 runtime=10 00:11:05.079 ioengine=libaio 00:11:05.079 direct=1 00:11:05.079 bs=4096 00:11:05.079 iodepth=1 00:11:05.079 norandommap=1 00:11:05.079 numjobs=1 00:11:05.079 00:11:05.079 [job0] 00:11:05.079 filename=/dev/nvme0n1 00:11:05.079 [job1] 00:11:05.079 filename=/dev/nvme0n2 00:11:05.079 [job2] 00:11:05.079 filename=/dev/nvme0n3 00:11:05.079 [job3] 00:11:05.079 filename=/dev/nvme0n4 00:11:05.079 Could not set queue depth (nvme0n1) 00:11:05.079 Could not set queue depth (nvme0n2) 00:11:05.079 Could not set queue depth (nvme0n3) 00:11:05.079 Could not set queue depth (nvme0n4) 00:11:05.338 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.338 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.338 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.338 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.338 fio-3.35 00:11:05.338 Starting 4 threads 00:11:08.628 15:29:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:08.628 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=71499776, buflen=4096 00:11:08.628 fio: pid=2175230, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.628 15:29:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:08.628 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=114630656, buflen=4096 00:11:08.628 fio: pid=2175223, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.628 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.628 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:08.628 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=34033664, buflen=4096 00:11:08.628 fio: pid=2175194, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.628 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.628 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:08.888 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=30965760, buflen=4096 00:11:08.888 fio: pid=2175204, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.888 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.888 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:08.888 00:11:08.888 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2175194: Sun Nov 3 15:29:46 2024 00:11:08.888 read: IOPS=8152, BW=31.8MiB/s (33.4MB/s)(96.5MiB/3029msec) 00:11:08.888 slat (usec): min=7, max=15741, avg=11.30, stdev=146.39 00:11:08.888 clat (usec): min=49, max=22321, avg=108.96, stdev=201.90 00:11:08.888 lat (usec): min=57, max=22330, avg=120.26, stdev=249.58 00:11:08.888 clat percentiles (usec): 00:11:08.888 | 1.00th=[ 64], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77], 00:11:08.888 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 101], 00:11:08.888 | 70.00th=[ 139], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 169], 00:11:08.888 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 212], 99.95th=[ 217], 00:11:08.888 | 99.99th=[ 330] 00:11:08.888 bw ( KiB/s): min=24864, max=44224, per=28.96%, avg=33356.80, stdev=9248.37, samples=5 00:11:08.888 iops : min= 6216, max=11056, avg=8339.20, stdev=2312.09, samples=5 00:11:08.888 lat (usec) : 50=0.02%, 100=59.79%, 250=40.18%, 500=0.01% 00:11:08.888 lat (msec) : 50=0.01% 00:11:08.888 cpu : usr=3.53%, sys=11.72%, ctx=24701, majf=0, minf=2 00:11:08.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.888 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.888 issued rwts: total=24694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.888 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2175204: Sun Nov 3 15:29:46 2024 00:11:08.888 read: IOPS=7329, BW=28.6MiB/s (30.0MB/s)(93.5MiB/3267msec) 00:11:08.888 slat (usec): min=8, max=8863, avg=11.66, stdev=130.21 00:11:08.888 clat (usec): min=36, max=22058, avg=122.39, stdev=201.94 00:11:08.888 lat (usec): min=56, max=22068, avg=134.05, stdev=240.04 00:11:08.888 clat percentiles (usec): 00:11:08.888 | 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 77], 00:11:08.888 | 30.00th=[ 103], 40.00th=[ 123], 50.00th=[ 131], 60.00th=[ 139], 00:11:08.888 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 172], 00:11:08.888 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 215], 99.95th=[ 219], 00:11:08.888 | 99.99th=[ 262] 00:11:08.888 bw ( KiB/s): min=24848, max=35139, per=24.26%, avg=27940.50, stdev=3802.85, samples=6 00:11:08.888 iops : min= 6212, max= 8784, avg=6985.00, stdev=950.43, samples=6 00:11:08.888 lat (usec) : 50=0.05%, 100=29.51%, 250=70.43%, 500=0.01% 00:11:08.888 lat (msec) : 50=0.01% 00:11:08.888 cpu : usr=3.34%, sys=10.75%, ctx=23951, majf=0, minf=1 00:11:08.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.888 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.888 issued rwts: total=23945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.888 
latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.888 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2175223: Sun Nov 3 15:29:46 2024 00:11:08.888 read: IOPS=9878, BW=38.6MiB/s (40.5MB/s)(109MiB/2833msec) 00:11:08.888 slat (usec): min=2, max=11908, avg= 9.65, stdev=84.93 00:11:08.888 clat (usec): min=52, max=283, avg=89.31, stdev=14.79 00:11:08.888 lat (usec): min=61, max=12011, avg=98.96, stdev=86.36 00:11:08.888 clat percentiles (usec): 00:11:08.888 | 1.00th=[ 68], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 82], 00:11:08.888 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 88], 00:11:08.888 | 70.00th=[ 90], 80.00th=[ 92], 90.00th=[ 101], 95.00th=[ 123], 00:11:08.888 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 194], 99.95th=[ 202], 00:11:08.888 | 99.99th=[ 235] 00:11:08.888 bw ( KiB/s): min=34240, max=42136, per=34.83%, avg=40121.60, stdev=3336.66, samples=5 00:11:08.888 iops : min= 8560, max=10534, avg=10030.40, stdev=834.17, samples=5 00:11:08.888 lat (usec) : 100=89.71%, 250=10.28%, 500=0.01% 00:11:08.888 cpu : usr=5.19%, sys=13.38%, ctx=27990, majf=0, minf=2 00:11:08.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.888 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.888 issued rwts: total=27987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.888 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2175230: Sun Nov 3 15:29:46 2024 00:11:08.888 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(68.2MiB/2636msec) 00:11:08.888 slat (nsec): min=8252, max=41646, avg=10046.47, stdev=2537.91 00:11:08.888 clat (usec): min=72, max=253, avg=138.23, stdev=22.91 00:11:08.888 lat (usec): min=84, max=263, avg=148.28, stdev=22.91 00:11:08.888 clat percentiles (usec): 00:11:08.888 | 1.00th=[ 82], 5.00th=[ 91], 10.00th=[ 111], 20.00th=[ 124], 00:11:08.888 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 145], 00:11:08.888 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 176], 00:11:08.888 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 215], 00:11:08.888 | 99.99th=[ 231] 00:11:08.888 bw ( KiB/s): min=24952, max=29384, per=23.28%, avg=26820.80, stdev=1995.34, samples=5 00:11:08.888 iops : min= 6238, max= 7346, avg=6705.20, stdev=498.84, samples=5 00:11:08.888 lat (usec) : 100=7.68%, 250=92.31%, 500=0.01% 00:11:08.888 cpu : usr=3.30%, sys=9.37%, ctx=17457, majf=0, minf=2 00:11:08.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.888 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.888 issued rwts: total=17457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.888 00:11:08.888 Run status group 0 (all jobs): 00:11:08.888 READ: bw=112MiB/s (118MB/s), 25.9MiB/s-38.6MiB/s (27.1MB/s-40.5MB/s), io=367MiB (385MB), run=2636-3267msec 00:11:08.888 00:11:08.888 Disk stats (read/write): 00:11:08.888 nvme0n1: ios=22852/0, merge=0/0, ticks=2355/0, in_queue=2355, util=93.85% 00:11:08.888 nvme0n2: ios=21667/0, merge=0/0, ticks=2640/0, in_queue=2640, util=94.27% 00:11:08.888 nvme0n3: ios=25962/0, merge=0/0, ticks=2133/0, 
in_queue=2133, util=96.06% 00:11:08.888 nvme0n4: ios=17273/0, merge=0/0, ticks=2249/0, in_queue=2249, util=96.46% 00:11:09.148 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.148 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:09.407 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.407 15:29:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:09.666 15:29:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.666 15:29:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:09.666 15:29:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.666 15:29:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:09.926 15:29:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:09.926 15:29:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2174979 00:11:09.926 15:29:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:09.926 15:29:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:10.862 nvmf hotplug test: fio failed as expected 00:11:10.862 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:11.122 15:29:48 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:11.122 rmmod nvme_rdma 00:11:11.122 rmmod nvme_fabrics 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2172135 ']' 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2172135 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2172135 ']' 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2172135 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2172135 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2172135' 00:11:11.122 killing process with pid 2172135 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2172135 00:11:11.122 15:29:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2172135 00:11:11.381 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:11.382 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:11.382 00:11:11.382 real 0m26.087s 00:11:11.382 user 2m7.417s 00:11:11.382 sys 0m9.895s 00:11:11.382 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:11.382 15:29:49 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.382 ************************************ 00:11:11.382 END TEST nvmf_fio_target 00:11:11.382 ************************************ 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.641 ************************************ 00:11:11.641 START TEST nvmf_bdevio 00:11:11.641 ************************************ 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:11.641 * Looking for test storage... 00:11:11.641 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.641 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:11.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.642 --rc genhtml_branch_coverage=1 00:11:11.642 --rc genhtml_function_coverage=1 00:11:11.642 --rc genhtml_legend=1 00:11:11.642 --rc geninfo_all_blocks=1 00:11:11.642 --rc geninfo_unexecuted_blocks=1 00:11:11.642 00:11:11.642 ' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:11.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.642 --rc genhtml_branch_coverage=1 00:11:11.642 --rc genhtml_function_coverage=1 00:11:11.642 --rc genhtml_legend=1 00:11:11.642 --rc geninfo_all_blocks=1 00:11:11.642 --rc geninfo_unexecuted_blocks=1 00:11:11.642 00:11:11.642 ' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:11.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.642 --rc genhtml_branch_coverage=1 00:11:11.642 --rc genhtml_function_coverage=1 00:11:11.642 --rc genhtml_legend=1 00:11:11.642 --rc geninfo_all_blocks=1 00:11:11.642 --rc geninfo_unexecuted_blocks=1 00:11:11.642 00:11:11.642 ' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:11.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.642 --rc genhtml_branch_coverage=1 00:11:11.642 --rc genhtml_function_coverage=1 00:11:11.642 --rc genhtml_legend=1 00:11:11.642 --rc geninfo_all_blocks=1 00:11:11.642 --rc geninfo_unexecuted_blocks=1 00:11:11.642 00:11:11.642 ' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:11.642 15:29:49 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.642 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.642 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.902 15:29:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:18.477 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:18.477 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:18.477 15:29:56 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:18.477 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:18.477 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:18.477 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
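The get_ip_address calls traced above reduce to a one-line pipeline: list the interface's IPv4 addresses one per line, take the ADDR/PREFIX column, and strip the prefix length. A minimal standalone sketch of that helper, using only the commands visible in this trace (the wrapper function is illustrative; the pipeline and the two results are the ones recorded in this run):

    get_ip_address() {
        local interface=$1
        # ip -o prints one line per address; field 4 is ADDR/PREFIX, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1    # -> 192.168.100.9 in this run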
00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:18.478 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:18.478 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:18.478 altname enp217s0f0np0 00:11:18.478 altname ens818f0np0 00:11:18.478 inet 192.168.100.8/24 scope global mlx_0_0 00:11:18.478 valid_lft forever preferred_lft forever 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:18.478 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:18.478 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:18.478 altname enp217s0f1np1 00:11:18.478 altname ens818f1np1 00:11:18.478 inet 192.168.100.9/24 scope global mlx_0_1 00:11:18.478 valid_lft forever preferred_lft forever 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:18.478 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:18.738 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:18.738 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:18.739 192.168.100.9' 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:18.739 192.168.100.9' 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:18.739 192.168.100.9' 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2179663 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2179663 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2179663 ']' 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:18.739 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.739 [2024-11-03 15:29:56.383456] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:11:18.739 [2024-11-03 15:29:56.383518] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.739 [2024-11-03 15:29:56.463937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.739 [2024-11-03 15:29:56.486819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.739 [2024-11-03 15:29:56.486859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.739 [2024-11-03 15:29:56.486868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.739 [2024-11-03 15:29:56.486877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.739 [2024-11-03 15:29:56.486900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:18.739 [2024-11-03 15:29:56.488555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:18.739 [2024-11-03 15:29:56.488647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:18.739 [2024-11-03 15:29:56.488756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.739 [2024-11-03 15:29:56.488758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:18.999 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:18.999 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:11:18.999 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.999 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:18.999 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.999 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.999 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:18.999 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.999 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.999 [2024-11-03 15:29:56.660871] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x229c550/0x22a0a00) succeed. 00:11:18.999 [2024-11-03 15:29:56.670069] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x229db90/0x22e20a0) succeed. 
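The reactor placement above follows directly from the core mask handed to nvmf_tgt: -m 0x78 is binary 0111 1000, i.e. bits 3 through 6 set, so SPDK starts one reactor on each of cores 3, 4, 5 and 6 (the notices print them in scheduling order, not numeric order). A quick standalone way to decode such a mask (illustrative only, not part of the test scripts):

    mask=0x78                                  # value passed via nvmfappstart -m 0x78 above
    for bit in {0..31}; do
        # test each bit of the mask; a set bit means a reactor on that core
        (( (mask >> bit) & 1 )) && echo "reactor on core $bit"
    done
    # prints cores 3, 4, 5, 6 -- matching the four reactor_run notices above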
00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.259 Malloc0 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.259 [2024-11-03 15:29:56.844589] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:19.259 { 00:11:19.259 "params": { 00:11:19.259 "name": "Nvme$subsystem", 00:11:19.259 "trtype": "$TEST_TRANSPORT", 00:11:19.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:19.259 "adrfam": "ipv4", 00:11:19.259 "trsvcid": "$NVMF_PORT", 00:11:19.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:19.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:19.259 "hdgst": ${hdgst:-false}, 00:11:19.259 "ddgst": ${ddgst:-false} 00:11:19.259 }, 00:11:19.259 "method": "bdev_nvme_attach_controller" 00:11:19.259 } 00:11:19.259 EOF 00:11:19.259 )") 00:11:19.259 15:29:56 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:19.259 15:29:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:19.259 "params": { 00:11:19.259 "name": "Nvme1", 00:11:19.259 "trtype": "rdma", 00:11:19.259 "traddr": "192.168.100.8", 00:11:19.259 "adrfam": "ipv4", 00:11:19.259 "trsvcid": "4420", 00:11:19.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:19.259 "hdgst": false, 00:11:19.259 "ddgst": false 00:11:19.259 }, 00:11:19.259 "method": "bdev_nvme_attach_controller" 00:11:19.259 }' 00:11:19.259 [2024-11-03 15:29:56.894847] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:11:19.259 [2024-11-03 15:29:56.894896] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179696 ] 00:11:19.259 [2024-11-03 15:29:56.978228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.259 [2024-11-03 15:29:57.003780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.259 [2024-11-03 15:29:57.003874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.259 [2024-11-03 15:29:57.003877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.520 I/O targets: 00:11:19.520 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:19.520 00:11:19.520 00:11:19.520 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.520 http://cunit.sourceforge.net/ 00:11:19.520 00:11:19.520 00:11:19.520 Suite: bdevio tests on: Nvme1n1 00:11:19.520 Test: blockdev write read block ...passed 00:11:19.520 Test: blockdev write zeroes read block ...passed 00:11:19.520 Test: blockdev write zeroes read no split ...passed 00:11:19.520 Test: blockdev write zeroes read split ...passed 00:11:19.520 Test: blockdev write zeroes read split partial ...passed 00:11:19.520 Test: blockdev reset ...[2024-11-03 15:29:57.203790] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:19.520 [2024-11-03 15:29:57.226325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:11:19.520 [2024-11-03 15:29:57.253421] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
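Annotation: the trace above provisions the whole data path: a 64 MiB malloc bdev (512-byte blocks) is exported through subsystem nqn.2016-06.io.spdk:cnode1 listening on 192.168.100.8:4420 over RDMA, and gen_nvmf_target_json assembles the bdev_nvme_attach_controller config that bdevio reads via --json /dev/fd/62. A re-runnable sketch of the same sequence, writing the config to a regular file instead of a process-substitution fd; the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape and is assumed here, since the trace prints only the inner method object:
  # Target side, as in the trace: malloc bdev -> subsystem -> namespace -> RDMA listener.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # Initiator side: feed bdevio the attach-controller config printed by gen_nvmf_target_json.
  echo '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme1","trtype":"rdma","traddr":"192.168.100.8","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}' > /tmp/bdevio.json
  ./test/bdev/bdevio/bdevio --json /tmp/bdevio.json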
00:11:19.520 passed 00:11:19.520 Test: blockdev write read 8 blocks ...passed 00:11:19.520 Test: blockdev write read size > 128k ...passed 00:11:19.520 Test: blockdev write read invalid size ...passed 00:11:19.520 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.520 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.520 Test: blockdev write read max offset ...passed 00:11:19.520 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.520 Test: blockdev writev readv 8 blocks ...passed 00:11:19.520 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.520 Test: blockdev writev readv block ...passed 00:11:19.520 Test: blockdev writev readv size > 128k ...passed 00:11:19.520 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.520 Test: blockdev comparev and writev ...[2024-11-03 15:29:57.256328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.520 [2024-11-03 15:29:57.256357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.256370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.520 [2024-11-03 15:29:57.256380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.256567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.520 [2024-11-03 15:29:57.256579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.256589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.520 [2024-11-03 15:29:57.256598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.256775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.520 [2024-11-03 15:29:57.256786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.256796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.520 [2024-11-03 15:29:57.256808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.256960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.520 [2024-11-03 15:29:57.256980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.256990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:19.520 [2024-11-03 15:29:57.256999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:19.520 passed 00:11:19.520 Test: blockdev nvme passthru rw ...passed 00:11:19.520 Test: blockdev nvme passthru vendor specific ...[2024-11-03 15:29:57.257253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:19.520 [2024-11-03 15:29:57.257265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.257312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:19.520 [2024-11-03 15:29:57.257323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.257369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:19.520 [2024-11-03 15:29:57.257380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:19.520 [2024-11-03 15:29:57.257424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:19.520 [2024-11-03 15:29:57.257434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:19.520 passed 00:11:19.520 Test: blockdev nvme admin passthru ...passed 00:11:19.520 Test: blockdev copy ...passed 00:11:19.520 00:11:19.520 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.520 suites 1 1 n/a 0 0 00:11:19.520 tests 23 23 23 0 0 00:11:19.520 asserts 152 152 152 0 n/a 00:11:19.520 00:11:19.520 Elapsed time = 0.171 seconds 00:11:19.780 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.780 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.780 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.780 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.780 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:19.780 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:19.781 rmmod nvme_rdma 00:11:19.781 rmmod nvme_fabrics 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.781 15:29:57 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2179663 ']' 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2179663 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2179663 ']' 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2179663 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2179663 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2179663' 00:11:19.781 killing process with pid 2179663 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2179663 00:11:19.781 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2179663 00:11:20.041 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.041 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:20.041 00:11:20.041 real 0m8.596s 00:11:20.041 user 0m8.087s 00:11:20.041 sys 0m5.858s 00:11:20.041 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.041 15:29:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.041 ************************************ 00:11:20.041 END TEST nvmf_bdevio 00:11:20.041 ************************************ 00:11:20.301 15:29:57 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:20.301 00:11:20.301 real 4m6.389s 00:11:20.301 user 10m42.661s 00:11:20.301 sys 1m35.214s 00:11:20.301 15:29:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.301 15:29:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:20.301 ************************************ 00:11:20.301 END TEST nvmf_target_core 00:11:20.301 ************************************ 00:11:20.301 15:29:57 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:20.301 15:29:57 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:20.301 15:29:57 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.301 15:29:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:20.301 ************************************ 00:11:20.301 START TEST nvmf_target_extra 00:11:20.301 ************************************ 00:11:20.301 15:29:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:20.301 * Looking for test storage... 00:11:20.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:11:20.301 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.301 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.301 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:20.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.562 --rc genhtml_branch_coverage=1 00:11:20.562 --rc genhtml_function_coverage=1 00:11:20.562 --rc genhtml_legend=1 00:11:20.562 --rc geninfo_all_blocks=1 00:11:20.562 --rc geninfo_unexecuted_blocks=1 00:11:20.562 00:11:20.562 ' 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:20.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.562 --rc genhtml_branch_coverage=1 00:11:20.562 --rc genhtml_function_coverage=1 00:11:20.562 --rc genhtml_legend=1 00:11:20.562 --rc geninfo_all_blocks=1 00:11:20.562 --rc geninfo_unexecuted_blocks=1 00:11:20.562 00:11:20.562 ' 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:20.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.562 --rc genhtml_branch_coverage=1 00:11:20.562 --rc genhtml_function_coverage=1 00:11:20.562 --rc genhtml_legend=1 00:11:20.562 --rc geninfo_all_blocks=1 00:11:20.562 --rc geninfo_unexecuted_blocks=1 00:11:20.562 00:11:20.562 ' 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:20.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.562 --rc genhtml_branch_coverage=1 00:11:20.562 --rc genhtml_function_coverage=1 00:11:20.562 --rc genhtml_legend=1 00:11:20.562 --rc geninfo_all_blocks=1 00:11:20.562 --rc geninfo_unexecuted_blocks=1 00:11:20.562 00:11:20.562 ' 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.562 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.563 ************************************ 00:11:20.563 START TEST nvmf_example 00:11:20.563 ************************************ 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:20.563 * Looking for test storage... 
00:11:20.563 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:20.563 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.823 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:20.823 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.823 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.823 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.823 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.823 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.823 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:20.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.824 --rc genhtml_branch_coverage=1 00:11:20.824 --rc genhtml_function_coverage=1 00:11:20.824 --rc genhtml_legend=1 00:11:20.824 --rc geninfo_all_blocks=1 00:11:20.824 --rc geninfo_unexecuted_blocks=1 00:11:20.824 00:11:20.824 ' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:20.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.824 --rc genhtml_branch_coverage=1 00:11:20.824 --rc genhtml_function_coverage=1 00:11:20.824 --rc genhtml_legend=1 00:11:20.824 --rc geninfo_all_blocks=1 00:11:20.824 --rc geninfo_unexecuted_blocks=1 00:11:20.824 00:11:20.824 ' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:20.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.824 --rc genhtml_branch_coverage=1 00:11:20.824 --rc genhtml_function_coverage=1 00:11:20.824 --rc genhtml_legend=1 00:11:20.824 --rc geninfo_all_blocks=1 00:11:20.824 --rc geninfo_unexecuted_blocks=1 00:11:20.824 00:11:20.824 ' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:20.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.824 --rc genhtml_branch_coverage=1 00:11:20.824 --rc genhtml_function_coverage=1 00:11:20.824 --rc genhtml_legend=1 00:11:20.824 --rc geninfo_all_blocks=1 00:11:20.824 --rc geninfo_unexecuted_blocks=1 00:11:20.824 00:11:20.824 ' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.824 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.824 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.825 15:29:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.952 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:28.953 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:28.953 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:28.953 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:28.953 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.953 15:30:05 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:28.953 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:28.953 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:28.953 altname enp217s0f0np0 00:11:28.953 altname ens818f0np0 00:11:28.953 inet 192.168.100.8/24 scope global mlx_0_0 00:11:28.953 valid_lft forever preferred_lft forever 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:28.953 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:28.954 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:28.954 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:28.954 altname enp217s0f1np1 00:11:28.954 altname ens818f1np1 00:11:28.954 inet 192.168.100.9/24 scope global mlx_0_1 00:11:28.954 valid_lft forever preferred_lft forever 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- 
# get_available_rdma_ips 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:28.954 15:30:05 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:28.954 192.168.100.9' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:28.954 192.168.100.9' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:28.954 192.168.100.9' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2183950 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2183950 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 2183950 ']' 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
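At this point both RDMA interfaces have been resolved (192.168.100.8 and 192.168.100.9) and the harness is bringing up the example target. Stripped of the xtrace noise, nvmfexamplestart plus waitforlisten amount to roughly the following. This is a minimal sketch, assuming a simple 100-iteration poll on the RPC socket; the real helpers in target/nvmf_example.sh and common/autotest_common.sh add rpc.py probing and richer trap handling:

    # Launch the example NVMe-oF target on cores 0-3 (-m 0xF) in the background.
    app=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf
    "$app" -i 0 -g 10000 -m 0xF &
    nvmfpid=$!
    trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # waitforlisten (simplified): poll until the app exposes its UNIX-domain RPC socket.
    for ((i = 100; i > 0; i--)); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done
    (( i > 0 )) || { echo "nvmf app never listened on /var/tmp/spdk.sock" >&2; exit 1; }

Once the socket answers, the rpc_cmd calls that follow (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) provision the target that spdk_nvme_perf then exercises.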
00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:28.954 15:30:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.954 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:29.214 15:30:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:41.438 Initializing NVMe Controllers
00:11:41.438 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:11:41.438 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:41.438 Initialization complete. Launching workers.
00:11:41.438 ========================================================
00:11:41.438                                                                                  Latency(us)
00:11:41.438 Device Information                                                            :       IOPS      MiB/s    Average        min        max
00:11:41.438 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   26026.15     101.66    2458.68     616.35   13975.92
00:11:41.438 ========================================================
00:11:41.438 Total                                                                         :   26026.15     101.66    2458.68     616.35   13975.92
00:11:41.438
00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:41.438 rmmod nvme_rdma 00:11:41.438 rmmod nvme_fabrics 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2183950 ']' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2183950 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 2183950 ']' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 2183950 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2183950 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:11:41.438 15:30:18
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2183950' 00:11:41.438 killing process with pid 2183950 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 2183950 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 2183950 00:11:41.438 nvmf threads initialize successfully 00:11:41.438 bdev subsystem init successfully 00:11:41.438 created a nvmf target service 00:11:41.438 create targets's poll groups done 00:11:41.438 all subsystems of target started 00:11:41.438 nvmf target is running 00:11:41.438 all subsystems of target stopped 00:11:41.438 destroy targets's poll groups done 00:11:41.438 destroyed the nvmf target service 00:11:41.438 bdev subsystem finish successfully 00:11:41.438 nvmf threads destroy successfully 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.438 00:11:41.438 real 0m20.251s 00:11:41.438 user 0m52.635s 00:11:41.438 sys 0m6.000s 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.438 ************************************ 00:11:41.438 END TEST nvmf_example 00:11:41.438 ************************************ 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.438 ************************************ 00:11:41.438 START TEST nvmf_filesystem 00:11:41.438 ************************************ 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:41.438 * Looking for test storage... 
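The START TEST / END TEST banners that bracket each suite come from the run_test wrapper in common/autotest_common.sh, whose argument check ('[' 3 -le 1 ']') and xtrace_disable calls are visible in the trace above. Conceptually the wrapper reduces to the sketch below; this is a simplification, and the real function also records per-test timing:

    # Simplified run_test: guard arguments, print banners, run the suite, propagate status.
    run_test() {
        [ "$#" -le 1 ] && return 1          # mirrors the '[' 3 -le 1 ']' guard above
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"                                # e.g. .../test/nvmf/target/filesystem.sh --transport=rdma
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

The same wrapper produced the END TEST nvmf_example banner just above; since each wrapped script receives the --transport=rdma argument seen here, the filesystem suite can in principle be replayed over another transport by changing only that flag.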
00:11:41.438 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.438 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:41.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.439 --rc genhtml_branch_coverage=1 00:11:41.439 --rc genhtml_function_coverage=1 00:11:41.439 --rc genhtml_legend=1 00:11:41.439 --rc geninfo_all_blocks=1 00:11:41.439 --rc geninfo_unexecuted_blocks=1 00:11:41.439 00:11:41.439 ' 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:41.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.439 --rc genhtml_branch_coverage=1 00:11:41.439 --rc genhtml_function_coverage=1 00:11:41.439 --rc genhtml_legend=1 00:11:41.439 --rc geninfo_all_blocks=1 00:11:41.439 --rc geninfo_unexecuted_blocks=1 00:11:41.439 00:11:41.439 ' 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:41.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.439 --rc genhtml_branch_coverage=1 00:11:41.439 --rc genhtml_function_coverage=1 00:11:41.439 --rc genhtml_legend=1 00:11:41.439 --rc geninfo_all_blocks=1 00:11:41.439 --rc geninfo_unexecuted_blocks=1 00:11:41.439 00:11:41.439 ' 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:41.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.439 --rc genhtml_branch_coverage=1 00:11:41.439 --rc genhtml_function_coverage=1 00:11:41.439 --rc genhtml_legend=1 00:11:41.439 --rc geninfo_all_blocks=1 00:11:41.439 --rc geninfo_unexecuted_blocks=1 00:11:41.439 00:11:41.439 ' 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:11:41.439 15:30:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:41.439 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 
00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:41.440 
15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 
-- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:11:41.440 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:41.440 #define SPDK_CONFIG_H 00:11:41.440 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:41.440 #define SPDK_CONFIG_APPS 1 00:11:41.440 #define SPDK_CONFIG_ARCH native 00:11:41.440 #undef SPDK_CONFIG_ASAN 00:11:41.440 #undef SPDK_CONFIG_AVAHI 00:11:41.440 #undef SPDK_CONFIG_CET 00:11:41.440 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:41.440 #define SPDK_CONFIG_COVERAGE 1 00:11:41.440 #define SPDK_CONFIG_CROSS_PREFIX 00:11:41.440 #undef SPDK_CONFIG_CRYPTO 00:11:41.440 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:41.440 #undef SPDK_CONFIG_CUSTOMOCF 00:11:41.440 #undef SPDK_CONFIG_DAOS 00:11:41.440 #define SPDK_CONFIG_DAOS_DIR 00:11:41.440 #define SPDK_CONFIG_DEBUG 1 00:11:41.440 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:41.440 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:11:41.440 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:11:41.440 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:41.440 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:41.440 #undef SPDK_CONFIG_DPDK_UADK 00:11:41.440 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:11:41.440 #define SPDK_CONFIG_EXAMPLES 1 00:11:41.440 #undef SPDK_CONFIG_FC 00:11:41.440 #define SPDK_CONFIG_FC_PATH 00:11:41.440 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:41.440 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:41.440 #define SPDK_CONFIG_FSDEV 1 00:11:41.440 #undef SPDK_CONFIG_FUSE 00:11:41.440 #undef SPDK_CONFIG_FUZZER 00:11:41.440 #define SPDK_CONFIG_FUZZER_LIB 00:11:41.440 #undef SPDK_CONFIG_GOLANG 00:11:41.440 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:41.440 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:41.440 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:41.440 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:41.440 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:41.440 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:41.440 #undef SPDK_CONFIG_HAVE_LZ4 00:11:41.440 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:41.440 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:41.440 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:41.440 #define SPDK_CONFIG_IDXD 1 00:11:41.440 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:41.440 #undef SPDK_CONFIG_IPSEC_MB 00:11:41.440 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:41.440 #define SPDK_CONFIG_ISAL 1 00:11:41.440 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:41.440 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:41.440 #define SPDK_CONFIG_LIBDIR 00:11:41.440 #undef SPDK_CONFIG_LTO 00:11:41.440 #define SPDK_CONFIG_MAX_LCORES 128 00:11:41.440 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:41.440 #define SPDK_CONFIG_NVME_CUSE 1 00:11:41.440 #undef SPDK_CONFIG_OCF 00:11:41.440 #define SPDK_CONFIG_OCF_PATH 00:11:41.441 #define SPDK_CONFIG_OPENSSL_PATH 00:11:41.441 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:41.441 #define SPDK_CONFIG_PGO_DIR 00:11:41.441 #undef SPDK_CONFIG_PGO_USE 00:11:41.441 #define SPDK_CONFIG_PREFIX /usr/local 00:11:41.441 #undef SPDK_CONFIG_RAID5F 00:11:41.441 #undef SPDK_CONFIG_RBD 00:11:41.441 #define SPDK_CONFIG_RDMA 1 00:11:41.441 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:41.441 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:41.441 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:41.441 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:41.441 #define SPDK_CONFIG_SHARED 1 00:11:41.441 #undef SPDK_CONFIG_SMA 00:11:41.441 #define SPDK_CONFIG_TESTS 1 00:11:41.441 #undef SPDK_CONFIG_TSAN 00:11:41.441 #define SPDK_CONFIG_UBLK 1 00:11:41.441 #define SPDK_CONFIG_UBSAN 1 00:11:41.441 #undef SPDK_CONFIG_UNIT_TESTS 00:11:41.441 #undef SPDK_CONFIG_URING 00:11:41.441 #define SPDK_CONFIG_URING_PATH 00:11:41.441 #undef SPDK_CONFIG_URING_ZNS 00:11:41.441 #undef SPDK_CONFIG_USDT 00:11:41.441 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:41.441 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:41.441 #undef SPDK_CONFIG_VFIO_USER 00:11:41.441 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:41.441 #define SPDK_CONFIG_VHOST 1 00:11:41.441 #define SPDK_CONFIG_VIRTIO 1 00:11:41.441 #undef SPDK_CONFIG_VTUNE 00:11:41.441 #define SPDK_CONFIG_VTUNE_DIR 00:11:41.441 #define SPDK_CONFIG_WERROR 1 00:11:41.441 #define SPDK_CONFIG_WPDK_DIR 00:11:41.441 #undef SPDK_CONFIG_XNVME 00:11:41.441 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:41.441 15:30:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:41.441 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:41.442 15:30:18 
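The long run of "# : 0" / "# : 1" entries above is the xtrace of bash's default-then-export idiom: the no-op ":" builtin evaluates "${VAR:=default}", which assigns the default only when the variable is unset or empty, and under set -x it traces as ": <expanded value>". A minimal sketch, using flag names and defaults taken from this log (the real autotest_common.sh may differ in detail):

: "${SPDK_TEST_VHOST:=0}"          # unset or empty -> assign 0; traces as ": 0"
export SPDK_TEST_VHOST
: "${SPDK_RUN_UBSAN:=1}"           # enabled flags trace as ": 1"
export SPDK_RUN_UBSAN
: "${SPDK_TEST_NVMF_NICS:=mlx5}"   # string defaults work the same; traces as ": mlx5"
export SPDK_TEST_NVMF_NICS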
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:41.442 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:41.443 15:30:18 
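The LD_LIBRARY_PATH and PYTHONPATH values above carry the same directory triple over and over, the natural result of the common script being sourced once per nested test suite and appending on every pass. A hypothetical cleanup helper (not part of SPDK; dedupe_path is an invented name) that collapses such a list while preserving order:

dedupe_path() {
    local out='' seen='' entry
    local IFS=':'
    for entry in $1; do
        [ -n "$entry" ] || continue          # skip empty fields from a leading ':'
        case ":$seen:" in
            *":$entry:"*) ;;                 # already kept once, drop the repeat
            *) seen="$seen:$entry"
               out="${out:+$out:}$entry" ;;
        esac
    done
    printf '%s\n' "$out"
}

LD_LIBRARY_PATH=$(dedupe_path "$LD_LIBRARY_PATH")
PYTHONPATH=$(dedupe_path "$PYTHONPATH")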
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # 
valgrind= 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:41.443 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2186301 ]] 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2186301 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.u30TAd 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' 
]] 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.u30TAd/tests/target /tmp/spdk.u30TAd 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=53664460800 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730615296 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8066154496 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30804684800 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # sizes["$mount"]=30865305600 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=60620800 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12323037184 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346126336 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23089152 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30864068608 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865309696 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1241088 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6173048832 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173061120 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:41.444 * Looking for test storage... 
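set_test_storage, traced through this block, loads df -T output into parallel arrays (mounts, fss, sizes, avails, uses) and then walks the candidate directories until one sits on a mount with enough headroom for the requested 2214592512 bytes. A condensed sketch of that decision, not the exact SPDK function:

requested_size=2214592512                   # value from the trace above
candidate=/tmp                              # stand-in for a storage candidate

mount_point=$(df --output=target "$candidate" | tail -1)
avail_bytes=$(df --output=avail -B1 "$candidate" | tail -1)

if (( avail_bytes >= requested_size )); then
    echo "* Found test storage at $candidate ($avail_bytes bytes free on $mount_point)"
else
    echo "need $requested_size bytes, only $avail_bytes on $mount_point" >&2
fi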
00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=53664460800 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10280747008 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:41.444 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:41.444 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:41.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.445 --rc genhtml_branch_coverage=1 00:11:41.445 --rc genhtml_function_coverage=1 00:11:41.445 --rc genhtml_legend=1 00:11:41.445 --rc geninfo_all_blocks=1 00:11:41.445 --rc geninfo_unexecuted_blocks=1 00:11:41.445 00:11:41.445 ' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:41.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.445 --rc genhtml_branch_coverage=1 00:11:41.445 --rc genhtml_function_coverage=1 00:11:41.445 --rc genhtml_legend=1 00:11:41.445 --rc geninfo_all_blocks=1 00:11:41.445 --rc geninfo_unexecuted_blocks=1 00:11:41.445 00:11:41.445 ' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:41.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.445 --rc genhtml_branch_coverage=1 00:11:41.445 --rc genhtml_function_coverage=1 00:11:41.445 --rc genhtml_legend=1 00:11:41.445 --rc geninfo_all_blocks=1 00:11:41.445 --rc geninfo_unexecuted_blocks=1 00:11:41.445 00:11:41.445 ' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:41.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.445 --rc genhtml_branch_coverage=1 00:11:41.445 --rc genhtml_function_coverage=1 00:11:41.445 --rc genhtml_legend=1 00:11:41.445 --rc geninfo_all_blocks=1 00:11:41.445 --rc geninfo_unexecuted_blocks=1 00:11:41.445 00:11:41.445 ' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.445 15:30:18 
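The "lt 1.15 2" check above verifies that the installed lcov predates 2.x: cmp_versions splits both version strings into fields (on '.', '-' and ':', per the IFS=.-: trace) and compares them numerically position by position up to the longer length, with missing fields counting as zero. A simplified standalone sketch that splits on dots only:

version_lt() {
    # True (exit 0) when $1 sorts strictly before $2, numeric field by field.
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"        # mirrors the result in the log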
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.445 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.446 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.446 15:30:18 
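The "integer expression expected" complaint above is a real, if harmless, bug in the sourced script: test's -eq aborts when its operand expands to an empty string, which is exactly what the traced '[' '' -eq 1 ']' shows. A defensive sketch of the same check:

var=''                             # simulate the empty flag behind common.sh line 33
if [ "${var:-0}" -eq 1 ]; then     # substitute 0 so the operand stays numeric
    echo "flag enabled"
else
    echo "flag disabled or unset"  # empty input lands here instead of erroring
fi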
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.446 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.108 15:30:25 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:48.108 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 
(0x15b3 - 0x1015)' 00:11:48.108 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:48.108 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:48.109 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:48.109 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 
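The discovery loop above resolves each matching Mellanox function (vendor 0x15b3, device 0x1015, the two ports at 0000:d9:00.0 and 0000:d9:00.1) to its kernel net device by globbing the device's sysfs node, as in the traced pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). A standalone sketch with the addresses from this host:

for pci in 0000:d9:00.0 0000:d9:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue          # glob stayed literal: no netdev bound
        echo "Found net devices under $pci: ${dev##*/}"
    done
done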
00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:48.109 15:30:25 
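load_ib_rdma_modules, traced at the top of this block, is simply the modprobe sequence for the kernel RDMA stack (the connection managers, core, and the userspace verbs/umad interfaces) that must be in place before allocate_nic_ips runs. The same module list as a standalone sketch (requires root):

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || echo "failed to load $mod" >&2
done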
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:48.109 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:48.109 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:48.109 altname enp217s0f0np0 00:11:48.109 altname ens818f0np0 00:11:48.109 inet 192.168.100.8/24 scope global mlx_0_0 00:11:48.109 valid_lft forever preferred_lft forever 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:48.109 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:48.109 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:48.109 altname enp217s0f1np1 00:11:48.109 altname ens818f1np1 00:11:48.109 inet 192.168.100.9/24 scope global mlx_0_1 00:11:48.109 valid_lft forever preferred_lft forever 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:48.109 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.392 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:48.392 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:48.392 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.392 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:48.392 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:48.393 192.168.100.9' 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:48.393 192.168.100.9' 
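Each RDMA interface's IPv4 address is pulled out with the ip/awk/cut pipeline traced at nvmf/common.sh@117. The same helper, reduced to a self-contained sketch (mlx_0_0 and 192.168.100.8 are this testbed's values):

  # First IPv4 address of an interface, as get_ip_address does in the trace.
  get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # prints 192.168.100.8 on this machine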
00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:48.393 192.168.100.9' 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.393 ************************************ 00:11:48.393 START TEST nvmf_filesystem_no_in_capsule 00:11:48.393 ************************************ 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:48.393 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.393 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2189627 00:11:48.393 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2189627 00:11:48.393 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.393 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2189627 ']' 00:11:48.393 15:30:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.393 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:48.393 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.393 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:48.393 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.393 [2024-11-03 15:30:26.056658] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:11:48.393 [2024-11-03 15:30:26.056707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.393 [2024-11-03 15:30:26.138798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.393 [2024-11-03 15:30:26.161485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.393 [2024-11-03 15:30:26.161523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.393 [2024-11-03 15:30:26.161533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.393 [2024-11-03 15:30:26.161541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.393 [2024-11-03 15:30:26.161548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
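The app_setup_trace notices above spell out how to inspect this target's tracepoints (it was started with -e 0xFFFF). Following the log's own suggestion, a capture would look roughly like:

  # Snapshot the running nvmf target's trace events (app instance id 0),
  # per the app_setup_trace notices above.
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis, as the
  # last notice suggests:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0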
00:11:48.393 [2024-11-03 15:30:26.163125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.393 [2024-11-03 15:30:26.163151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.393 [2024-11-03 15:30:26.163236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.393 [2024-11-03 15:30:26.163238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.653 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.653 [2024-11-03 15:30:26.314365] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:48.653 [2024-11-03 15:30:26.335870] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10b5c50/0x10ba100) succeed. 00:11:48.653 [2024-11-03 15:30:26.345133] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10b7290/0x10fb7a0) succeed. 
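This block and the next one drive the whole target setup through rpc_cmd, the harness wrapper around SPDK's scripts/rpc.py. Consolidated into plain rpc.py calls (transport options, Malloc geometry, NQN, serial and listen address are the ones used by this pass; each step is confirmed by a later trace line):

  # target/filesystem.sh@52-56, no-in-capsule pass (-c 0).
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MB disk, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420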
00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.912 Malloc1 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.912 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.912 [2024-11-03 15:30:26.603041] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:48.913 15:30:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:48.913 { 00:11:48.913 "name": "Malloc1", 00:11:48.913 "aliases": [ 00:11:48.913 "148f532d-9175-486c-a456-1f1212b8b212" 00:11:48.913 ], 00:11:48.913 "product_name": "Malloc disk", 00:11:48.913 "block_size": 512, 00:11:48.913 "num_blocks": 1048576, 00:11:48.913 "uuid": "148f532d-9175-486c-a456-1f1212b8b212", 00:11:48.913 "assigned_rate_limits": { 00:11:48.913 "rw_ios_per_sec": 0, 00:11:48.913 "rw_mbytes_per_sec": 0, 00:11:48.913 "r_mbytes_per_sec": 0, 00:11:48.913 "w_mbytes_per_sec": 0 00:11:48.913 }, 00:11:48.913 "claimed": true, 00:11:48.913 "claim_type": "exclusive_write", 00:11:48.913 "zoned": false, 00:11:48.913 "supported_io_types": { 00:11:48.913 "read": true, 00:11:48.913 "write": true, 00:11:48.913 "unmap": true, 00:11:48.913 "flush": true, 00:11:48.913 "reset": true, 00:11:48.913 "nvme_admin": false, 00:11:48.913 "nvme_io": false, 00:11:48.913 "nvme_io_md": false, 00:11:48.913 "write_zeroes": true, 00:11:48.913 "zcopy": true, 00:11:48.913 "get_zone_info": false, 00:11:48.913 "zone_management": false, 00:11:48.913 "zone_append": false, 00:11:48.913 "compare": false, 00:11:48.913 "compare_and_write": false, 00:11:48.913 "abort": true, 00:11:48.913 "seek_hole": false, 00:11:48.913 "seek_data": false, 00:11:48.913 "copy": true, 00:11:48.913 "nvme_iov_md": false 00:11:48.913 }, 00:11:48.913 "memory_domains": [ 00:11:48.913 { 00:11:48.913 "dma_device_id": "system", 00:11:48.913 "dma_device_type": 1 00:11:48.913 }, 00:11:48.913 { 00:11:48.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.913 "dma_device_type": 2 00:11:48.913 } 00:11:48.913 ], 00:11:48.913 "driver_specific": {} 00:11:48.913 } 00:11:48.913 ]' 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:48.913 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:49.173 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:49.173 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:49.173 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:49.173 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:11:49.173 15:30:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:50.110 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.110 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:50.110 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.110 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:50.110 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:52.024 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:52.025 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:52.025 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:52.025 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:52.025 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:52.025 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:11:52.025 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:52.025 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:52.288 15:30:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.224 ************************************ 00:11:53.224 START TEST filesystem_ext4 00:11:53.224 ************************************ 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:53.224 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:53.225 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:53.225 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:53.225 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:53.225 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:53.225 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:53.225 mke2fs 1.47.0 (5-Feb-2023) 00:11:53.484 Discarding device blocks: 0/522240 done 00:11:53.484 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:53.484 Filesystem UUID: 5707af06-c9be-4dde-868d-f97af74c9a3e 00:11:53.484 Superblock backups stored on 
blocks: 00:11:53.484 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:53.484 00:11:53.484 Allocating group tables: 0/64 done 00:11:53.484 Writing inode tables: 0/64 done 00:11:53.484 Creating journal (8192 blocks): done 00:11:53.484 Writing superblocks and filesystem accounting information: 0/64 done 00:11:53.484 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2189627 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.484 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.485 00:11:53.485 real 0m0.199s 00:11:53.485 user 0m0.032s 00:11:53.485 sys 0m0.073s 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:53.485 ************************************ 00:11:53.485 END TEST filesystem_ext4 00:11:53.485 ************************************ 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:11:53.485 ************************************ 00:11:53.485 START TEST filesystem_btrfs 00:11:53.485 ************************************ 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:53.485 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:53.745 btrfs-progs v6.8.1 00:11:53.745 See https://btrfs.readthedocs.io for more information. 00:11:53.745 00:11:53.745 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:53.745 NOTE: several default settings have changed in version 5.15, please make sure 00:11:53.745 this does not affect your deployments: 00:11:53.745 - DUP for metadata (-m dup) 00:11:53.745 - enabled no-holes (-O no-holes) 00:11:53.745 - enabled free-space-tree (-R free-space-tree) 00:11:53.745 00:11:53.745 Label: (null) 00:11:53.745 UUID: 4b35c036-ee12-4351-a125-6459e6cb01bc 00:11:53.745 Node size: 16384 00:11:53.745 Sector size: 4096 (CPU page size: 4096) 00:11:53.745 Filesystem size: 510.00MiB 00:11:53.745 Block group profiles: 00:11:53.745 Data: single 8.00MiB 00:11:53.745 Metadata: DUP 32.00MiB 00:11:53.745 System: DUP 8.00MiB 00:11:53.746 SSD detected: yes 00:11:53.746 Zoned device: no 00:11:53.746 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:53.746 Checksum: crc32c 00:11:53.746 Number of devices: 1 00:11:53.746 Devices: 00:11:53.746 ID SIZE PATH 00:11:53.746 1 510.00MiB /dev/nvme0n1p1 00:11:53.746 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2189627 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.746 00:11:53.746 real 0m0.254s 00:11:53.746 user 0m0.034s 00:11:53.746 sys 0m0.128s 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.746 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.746 ************************************ 00:11:53.746 END TEST filesystem_btrfs 
00:11:53.746 ************************************ 00:11:54.024 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:54.024 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:54.024 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.024 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.024 ************************************ 00:11:54.024 START TEST filesystem_xfs 00:11:54.024 ************************************ 00:11:54.024 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:54.024 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:54.024 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.024 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:54.025 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:54.025 = sectsz=512 attr=2, projid32bit=1 00:11:54.025 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:54.025 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:54.025 data = bsize=4096 blocks=130560, imaxpct=25 00:11:54.025 = sunit=0 swidth=0 blks 00:11:54.025 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:54.025 log =internal log bsize=4096 blocks=16384, version=2 00:11:54.025 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:54.025 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:54.025 Discarding blocks...Done. 
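All three filesystem subtests (ext4, btrfs, xfs) funnel through the same make_filesystem helper and mount cycle; the only per-fs difference visible in the trace is the force flag (-F for ext4 at @934, -f otherwise at @936). A condensed sketch of the pattern; the function name here is illustrative, the harness splits this across make_filesystem and the filesystem.sh steps:

  # Pattern from common/autotest_common.sh@928-939 plus target/filesystem.sh@23-30.
  format_and_exercise() {
    local fstype=$1 dev_name=$2 force=-f
    [ "$fstype" = ext4 ] && force=-F
    "mkfs.$fstype" "$force" "$dev_name"
    mount "$dev_name" /mnt/device
    touch /mnt/device/aaa && sync    # write a file and flush it
    rm /mnt/device/aaa && sync       # then remove it and flush again
    umount /mnt/device
  }
  format_and_exercise xfs /dev/nvme0n1p1    # likewise ext4 and btrfs above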
00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2189627 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.025 00:11:54.025 real 0m0.206s 00:11:54.025 user 0m0.034s 00:11:54.025 sys 0m0.075s 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.025 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:54.025 ************************************ 00:11:54.025 END TEST filesystem_xfs 00:11:54.025 ************************************ 00:11:54.309 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:54.309 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:54.309 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:55.267 15:30:32 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2189627 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2189627 ']' 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2189627 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2189627 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2189627' 00:11:55.267 killing process with pid 2189627 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 2189627 00:11:55.267 15:30:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 2189627 00:11:55.528 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:55.528 00:11:55.528 real 0m7.306s 00:11:55.528 user 0m28.583s 00:11:55.528 sys 0m1.185s 00:11:55.528 15:30:33 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:55.528 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.528 ************************************ 00:11:55.528 END TEST nvmf_filesystem_no_in_capsule 00:11:55.528 ************************************ 00:11:55.788 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:55.788 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:55.788 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:55.788 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:55.788 ************************************ 00:11:55.788 START TEST nvmf_filesystem_in_capsule 00:11:55.788 ************************************ 00:11:55.788 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:55.788 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:55.788 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:55.788 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2191172 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2191172 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2191172 ']' 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
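Between the two passes the fabric is torn down as traced above: sync, nvme disconnect, nvmf_delete_subsystem, then killprocess on the target pid. Reduced to a sketch (the NQN and pid 2189627 are this run's values):

  # Teardown per target/filesystem.sh@93-101 and killprocess.
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # drops the host controller
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 2189627 && wait 2189627   # wait works here because the harness
                                 # launched nvmf_tgt from the same shell

The pass that starts here repeats the identical flow with in_capsule=4096, i.e. the transport is created with -c 4096 instead of -c 0.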
00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:55.789 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 [2024-11-03 15:30:33.447862] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:11:55.789 [2024-11-03 15:30:33.447907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.789 [2024-11-03 15:30:33.524331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.789 [2024-11-03 15:30:33.546710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.789 [2024-11-03 15:30:33.546748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.789 [2024-11-03 15:30:33.546757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.789 [2024-11-03 15:30:33.546765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.789 [2024-11-03 15:30:33.546772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.789 [2024-11-03 15:30:33.548307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.789 [2024-11-03 15:30:33.548402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.789 [2024-11-03 15:30:33.548466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.789 [2024-11-03 15:30:33.548468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.049 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.049 [2024-11-03 15:30:33.708969] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e5fc50/0x1e64100) 
succeed. 00:11:56.049 [2024-11-03 15:30:33.717948] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e61290/0x1ea57a0) succeed. 00:11:56.309 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.310 Malloc1 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.310 [2024-11-03 15:30:33.994154] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.310 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 
00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:56.310 { 00:11:56.310 "name": "Malloc1", 00:11:56.310 "aliases": [ 00:11:56.310 "9630a363-3efd-449a-a32b-fb2986ffba4e" 00:11:56.310 ], 00:11:56.310 "product_name": "Malloc disk", 00:11:56.310 "block_size": 512, 00:11:56.310 "num_blocks": 1048576, 00:11:56.310 "uuid": "9630a363-3efd-449a-a32b-fb2986ffba4e", 00:11:56.310 "assigned_rate_limits": { 00:11:56.310 "rw_ios_per_sec": 0, 00:11:56.310 "rw_mbytes_per_sec": 0, 00:11:56.310 "r_mbytes_per_sec": 0, 00:11:56.310 "w_mbytes_per_sec": 0 00:11:56.310 }, 00:11:56.310 "claimed": true, 00:11:56.310 "claim_type": "exclusive_write", 00:11:56.310 "zoned": false, 00:11:56.310 "supported_io_types": { 00:11:56.310 "read": true, 00:11:56.310 "write": true, 00:11:56.310 "unmap": true, 00:11:56.310 "flush": true, 00:11:56.310 "reset": true, 00:11:56.310 "nvme_admin": false, 00:11:56.310 "nvme_io": false, 00:11:56.310 "nvme_io_md": false, 00:11:56.310 "write_zeroes": true, 00:11:56.310 "zcopy": true, 00:11:56.310 "get_zone_info": false, 00:11:56.310 "zone_management": false, 00:11:56.310 "zone_append": false, 00:11:56.310 "compare": false, 00:11:56.310 "compare_and_write": false, 00:11:56.310 "abort": true, 00:11:56.310 "seek_hole": false, 00:11:56.310 "seek_data": false, 00:11:56.310 "copy": true, 00:11:56.310 "nvme_iov_md": false 00:11:56.310 }, 00:11:56.310 "memory_domains": [ 00:11:56.310 { 00:11:56.310 "dma_device_id": "system", 00:11:56.310 "dma_device_type": 1 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.310 "dma_device_type": 2 00:11:56.310 } 00:11:56.310 ], 00:11:56.310 "driver_specific": {} 00:11:56.310 } 00:11:56.310 ]' 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:56.310 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:56.570 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:56.570 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:56.570 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:56.570 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
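[Editor's sketch] The get_bdev_size step above reduces to arithmetic on two jq-extracted fields of the bdev_get_bdevs JSON shown in the trace; a condensed sketch with the bdev name and jq filters as logged:

    # block_size x num_blocks, as parsed from bdev_get_bdevs above
    bs=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
    nb=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
    echo $(( bs * nb / 1024 / 1024 ))                               # 512 (MiB)

Hence malloc_size=536870912 bytes (512 x 1024 x 1024), which the test later compares against the size reported for the connected nvme device.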
00:11:56.570 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:57.507 15:30:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.507 15:30:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:57.507 15:30:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.507 15:30:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:57.507 15:30:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:59.412 15:30:37 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:59.412 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:59.670 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.608 ************************************ 00:12:00.608 START TEST filesystem_in_capsule_ext4 00:12:00.608 ************************************ 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:12:00.608 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:00.608 mke2fs 1.47.0 (5-Feb-2023) 00:12:00.868 Discarding device blocks: 0/522240 done 00:12:00.868 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:00.868 Filesystem UUID: 70bf8a6d-9506-4380-a227-4905db24bff8 00:12:00.868 
Superblock backups stored on blocks: 00:12:00.868 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:00.868 00:12:00.868 Allocating group tables: 0/64 done 00:12:00.868 Writing inode tables: 0/64 done 00:12:00.868 Creating journal (8192 blocks): done 00:12:00.868 Writing superblocks and filesystem accounting information: 0/64 done 00:12:00.868 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2191172 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:00.868 00:12:00.868 real 0m0.201s 00:12:00.868 user 0m0.028s 00:12:00.868 sys 0m0.078s 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:00.868 ************************************ 00:12:00.868 END TEST filesystem_in_capsule_ext4 00:12:00.868 ************************************ 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:00.868 15:30:38 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.868 ************************************ 00:12:00.868 START TEST filesystem_in_capsule_btrfs 00:12:00.868 ************************************ 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:12:00.868 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:01.128 btrfs-progs v6.8.1 00:12:01.128 See https://btrfs.readthedocs.io for more information. 00:12:01.128 00:12:01.128 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:01.128 NOTE: several default settings have changed in version 5.15, please make sure 00:12:01.128 this does not affect your deployments: 00:12:01.128 - DUP for metadata (-m dup) 00:12:01.128 - enabled no-holes (-O no-holes) 00:12:01.128 - enabled free-space-tree (-R free-space-tree) 00:12:01.128 00:12:01.128 Label: (null) 00:12:01.128 UUID: 11152db0-06be-45e0-baaa-14053b19f1c3 00:12:01.128 Node size: 16384 00:12:01.128 Sector size: 4096 (CPU page size: 4096) 00:12:01.128 Filesystem size: 510.00MiB 00:12:01.128 Block group profiles: 00:12:01.128 Data: single 8.00MiB 00:12:01.128 Metadata: DUP 32.00MiB 00:12:01.128 System: DUP 8.00MiB 00:12:01.128 SSD detected: yes 00:12:01.128 Zoned device: no 00:12:01.128 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:01.128 Checksum: crc32c 00:12:01.128 Number of devices: 1 00:12:01.128 Devices: 00:12:01.128 ID SIZE PATH 00:12:01.128 1 510.00MiB /dev/nvme0n1p1 00:12:01.128 00:12:01.128 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2191172 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.129 00:12:01.129 real 0m0.249s 00:12:01.129 user 0m0.032s 00:12:01.129 sys 0m0.125s 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.129 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.129 ************************************ 00:12:01.129 END TEST filesystem_in_capsule_btrfs 00:12:01.129 ************************************ 00:12:01.388 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:01.388 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:01.388 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.388 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.388 ************************************ 00:12:01.388 START TEST filesystem_in_capsule_xfs 00:12:01.388 ************************************ 00:12:01.388 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:12:01.388 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:01.388 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.389 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:01.389 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:12:01.389 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:01.389 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:12:01.389 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:12:01.389 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:12:01.389 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:12:01.389 15:30:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:01.389 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:01.389 = sectsz=512 attr=2, projid32bit=1 00:12:01.389 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:01.389 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:01.389 data = bsize=4096 blocks=130560, imaxpct=25 00:12:01.389 = sunit=0 swidth=0 blks 00:12:01.389 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:01.389 log =internal log bsize=4096 blocks=16384, version=2 00:12:01.389 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:01.389 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:01.389 Discarding blocks...Done. 
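[Editor's sketch] Each filesystem case runs the same create-and-survive check, seen above for ext4 and btrfs and repeated next for xfs; condensed, with the device path and target pid (2191172) from this run:

    # per-filesystem check, as traced in filesystem.sh steps 21-43
    mkfs.xfs -f /dev/nvme0n1p1        # mkfs.ext4 -F / mkfs.btrfs -f in the other cases
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync     # write a file and flush it over NVMe/RDMA
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 2191172                          # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible after umount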
00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2191172 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.389 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.648 00:12:01.648 real 0m0.220s 00:12:01.648 user 0m0.033s 00:12:01.648 sys 0m0.080s 00:12:01.648 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.648 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:01.648 ************************************ 00:12:01.648 END TEST filesystem_in_capsule_xfs 00:12:01.648 ************************************ 00:12:01.648 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:01.648 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:01.648 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.587 15:30:40 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2191172 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2191172 ']' 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2191172 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2191172 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2191172' 00:12:02.587 killing process with pid 2191172 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 2191172 00:12:02.587 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 2191172 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:03.157 00:12:03.157 real 0m7.327s 
00:12:03.157 user 0m28.559s 00:12:03.157 sys 0m1.262s 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 ************************************ 00:12:03.157 END TEST nvmf_filesystem_in_capsule 00:12:03.157 ************************************ 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:03.157 rmmod nvme_rdma 00:12:03.157 rmmod nvme_fabrics 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:03.157 00:12:03.157 real 0m22.284s 00:12:03.157 user 0m59.432s 00:12:03.157 sys 0m8.056s 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 ************************************ 00:12:03.157 END TEST nvmf_filesystem 00:12:03.157 ************************************ 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 ************************************ 00:12:03.157 START TEST nvmf_target_discovery 00:12:03.157 ************************************ 00:12:03.157 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:03.417 * Looking for test storage... 
00:12:03.417 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:03.417 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:03.417 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:03.417 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.417 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:03.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.418 --rc genhtml_branch_coverage=1 00:12:03.418 --rc genhtml_function_coverage=1 00:12:03.418 --rc genhtml_legend=1 00:12:03.418 --rc geninfo_all_blocks=1 00:12:03.418 --rc geninfo_unexecuted_blocks=1 00:12:03.418 00:12:03.418 ' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:03.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.418 --rc genhtml_branch_coverage=1 00:12:03.418 --rc genhtml_function_coverage=1 00:12:03.418 --rc genhtml_legend=1 00:12:03.418 --rc geninfo_all_blocks=1 00:12:03.418 --rc geninfo_unexecuted_blocks=1 00:12:03.418 00:12:03.418 ' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:03.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.418 --rc genhtml_branch_coverage=1 00:12:03.418 --rc genhtml_function_coverage=1 00:12:03.418 --rc genhtml_legend=1 00:12:03.418 --rc geninfo_all_blocks=1 00:12:03.418 --rc geninfo_unexecuted_blocks=1 00:12:03.418 00:12:03.418 ' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:03.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.418 --rc genhtml_branch_coverage=1 00:12:03.418 --rc genhtml_function_coverage=1 00:12:03.418 --rc genhtml_legend=1 00:12:03.418 --rc geninfo_all_blocks=1 00:12:03.418 --rc geninfo_unexecuted_blocks=1 00:12:03.418 00:12:03.418 ' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.418 15:30:41 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.418 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.418 15:30:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.991 15:30:47 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
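[Editor's sketch] The scan that begins here matches PCI devices against a table of Intel (0x8086) and Mellanox (0x15b3) vendor/device IDs and then reports what it finds below. A rough standalone equivalent for the Mellanox case, using lspci instead of the script's pci_bus_cache (an editorial substitution, not part of the test scripts; 0x1015 is the device ID found in this run):

    # list Mellanox NICs (vendor 0x15b3) and the netdevs bound to them
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        echo "Found $pci"
        ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null
    done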
00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:09.991 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:09.991 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:09.991 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.991 15:30:47 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:09.991 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:09.991 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
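rdma_device_init loads the whole kernel RDMA stack (@66-@72) before any IPs are assigned; the exact order matters little since modprobe resolves dependencies, so failures here usually mean missing rdma-core or kernel-module packages rather than ordering. The same sequence as a loop:

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m" || echo "failed to load $m" >&2
    done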
00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:09.992 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:09.992 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:09.992 altname enp217s0f0np0 00:12:09.992 altname ens818f0np0 00:12:09.992 inet 192.168.100.8/24 scope global mlx_0_0 00:12:09.992 valid_lft forever preferred_lft forever 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:09.992 15:30:47 
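get_ip_address (@116-@117) extracts an interface's primary IPv4 address with ip -o -4 plus awk and cut: -o prints one line per address with ADDR/PREFIX in the fourth field, so cutting on / leaves the bare address, 192.168.100.8 here. Standalone, under the same assumptions:

    get_ip_address() {
        local interface=$1
        # one line per address: "6: mlx_0_0 inet 192.168.100.8/24 brd ..."
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip=$(get_ip_address mlx_0_0)   # -> 192.168.100.8 on this rig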
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:09.992 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:09.992 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:09.992 altname enp217s0f1np1 00:12:09.992 altname ens818f1np1 00:12:09.992 inet 192.168.100.9/24 scope global mlx_0_1 00:12:09.992 valid_lft forever preferred_lft forever 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:09.992 192.168.100.9' 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:09.992 192.168.100.9' 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:09.992 192.168.100.9' 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:09.992 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.993 15:30:47 
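The two discovered addresses come back as one newline-separated string, and @485/@486 peel it apart with head and tail; the variable really does hold an embedded newline, which is why the trace echoes it quoted. Reduced to the essentials:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<<"$RDMA_IP_LIST")               # .8
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<<"$RDMA_IP_LIST" | head -n 1) # .9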
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2195870 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2195870 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 2195870 ']' 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:09.993 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.993 [2024-11-03 15:30:47.704958] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:12:09.993 [2024-11-03 15:30:47.705024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.253 [2024-11-03 15:30:47.782721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.253 [2024-11-03 15:30:47.805532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.253 [2024-11-03 15:30:47.805574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.253 [2024-11-03 15:30:47.805583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.253 [2024-11-03 15:30:47.805591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.253 [2024-11-03 15:30:47.805617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
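nvmfappstart (@507-@510) launches nvmf_tgt with core mask 0xF, records the PID, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock before any rpc_cmd runs. A simplified re-creation (the real helper also retries and installs traps; the binary path below is illustrative):

    /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket rather than sleeping a fixed time.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done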
00:12:10.253 [2024-11-03 15:30:47.807164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.253 [2024-11-03 15:30:47.807257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.253 [2024-11-03 15:30:47.807325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.253 [2024-11-03 15:30:47.807327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.253 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:10.253 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:12:10.253 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.253 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:10.253 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.253 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.253 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:10.253 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.253 15:30:47 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.253 [2024-11-03 15:30:47.980674] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x918c50/0x91d100) succeed. 00:12:10.253 [2024-11-03 15:30:47.989935] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x91a290/0x95e7a0) succeed. 
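With both mlx5 ports probed (the two create_ib_device ... succeed lines), @23 creates the RDMA transport; --num-shared-buffers 1024 sizes the shared data-buffer pool and -u 8192 sets the I/O unit size. Outside the rpc_cmd wrapper this is simply (run from the SPDK checkout):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192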
00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.514 Null1 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.514 [2024-11-03 15:30:48.173257] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.514 Null2 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:10.514 15:30:48 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.514 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.514 Null3 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.515 15:30:48 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.515 Null4 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
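discovery.sh's main loop (@26-@30) builds four null bdevs and wraps each in its own subsystem with one namespace and one RDMA listener; @32/@35 then expose the discovery service itself and a referral on port 4430. Collapsed into plain rpc.py calls under the same NQNs, with the size/block-size arguments exactly as traced at @27 (run from the SPDK checkout):

    rpc=scripts/rpc.py
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
             -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
             -t rdma -a 192.168.100.8 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430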
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.515 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:12:10.775 00:12:10.775 Discovery Log Number of Records 6, Generation counter 6 00:12:10.775 =====Discovery Log Entry 0====== 00:12:10.775 trtype: rdma 00:12:10.775 adrfam: ipv4 00:12:10.775 subtype: current discovery subsystem 00:12:10.775 treq: not required 00:12:10.775 portid: 0 00:12:10.775 trsvcid: 4420 00:12:10.775 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:10.775 traddr: 192.168.100.8 00:12:10.775 eflags: explicit discovery connections, duplicate discovery information 00:12:10.775 rdma_prtype: not specified 00:12:10.775 rdma_qptype: connected 00:12:10.775 rdma_cms: rdma-cm 00:12:10.775 rdma_pkey: 0x0000 00:12:10.775 =====Discovery Log Entry 1====== 00:12:10.775 trtype: rdma 00:12:10.775 adrfam: ipv4 00:12:10.775 subtype: nvme subsystem 00:12:10.775 treq: not required 00:12:10.775 portid: 0 00:12:10.775 trsvcid: 4420 00:12:10.775 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:10.775 traddr: 192.168.100.8 00:12:10.775 eflags: none 00:12:10.775 rdma_prtype: not specified 00:12:10.775 rdma_qptype: connected 00:12:10.775 rdma_cms: rdma-cm 00:12:10.775 rdma_pkey: 0x0000 00:12:10.775 =====Discovery Log Entry 2====== 00:12:10.775 trtype: rdma 00:12:10.775 adrfam: ipv4 00:12:10.775 subtype: nvme subsystem 00:12:10.775 treq: not required 00:12:10.775 portid: 0 00:12:10.775 trsvcid: 4420 00:12:10.775 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:10.775 traddr: 192.168.100.8 00:12:10.775 eflags: none 00:12:10.775 rdma_prtype: not specified 00:12:10.775 rdma_qptype: connected 00:12:10.775 rdma_cms: rdma-cm 00:12:10.775 rdma_pkey: 0x0000 00:12:10.775 =====Discovery Log Entry 3====== 00:12:10.775 trtype: rdma 00:12:10.775 adrfam: ipv4 00:12:10.775 subtype: nvme subsystem 00:12:10.775 treq: not required 00:12:10.775 portid: 0 00:12:10.775 trsvcid: 4420 00:12:10.775 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:10.775 traddr: 192.168.100.8 00:12:10.775 eflags: none 00:12:10.775 rdma_prtype: not specified 00:12:10.775 rdma_qptype: connected 00:12:10.775 rdma_cms: rdma-cm 00:12:10.775 rdma_pkey: 0x0000 00:12:10.775 =====Discovery Log Entry 4====== 00:12:10.775 trtype: rdma 00:12:10.775 adrfam: ipv4 00:12:10.775 subtype: nvme subsystem 00:12:10.775 treq: not required 00:12:10.775 portid: 0 00:12:10.775 trsvcid: 4420 00:12:10.775 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:10.775 traddr: 192.168.100.8 00:12:10.775 eflags: none 00:12:10.775 rdma_prtype: not specified 00:12:10.775 rdma_qptype: connected 00:12:10.775 rdma_cms: rdma-cm 00:12:10.775 rdma_pkey: 0x0000 00:12:10.775 =====Discovery Log Entry 5====== 00:12:10.775 trtype: rdma 00:12:10.775 adrfam: ipv4 00:12:10.775 subtype: discovery subsystem referral 00:12:10.775 treq: not required 00:12:10.775 portid: 0 00:12:10.775 trsvcid: 4430 00:12:10.775 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:10.775 traddr: 192.168.100.8 00:12:10.775 eflags: none 00:12:10.775 rdma_prtype: unrecognized 00:12:10.775 rdma_qptype: unrecognized 00:12:10.775 rdma_cms: unrecognized 00:12:10.775 rdma_pkey: 0x0000 00:12:10.775 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:10.775 Perform nvmf subsystem discovery via RPC 00:12:10.775 15:30:48 
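Six records come back because the current discovery subsystem, the four cnode subsystems, and the port-4430 referral each get one entry. A record from this log could then be consumed with the NVME_CONNECT prefix prepared at @388; this is a hypothetical follow-on, since this test only discovers and never connects:

    # -i 15 caps the I/O queue count, matching "nvme connect -i 15" above.
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e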
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:10.775 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.775 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.775 [ 00:12:10.775 { 00:12:10.775 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:10.775 "subtype": "Discovery", 00:12:10.776 "listen_addresses": [ 00:12:10.776 { 00:12:10.776 "trtype": "RDMA", 00:12:10.776 "adrfam": "IPv4", 00:12:10.776 "traddr": "192.168.100.8", 00:12:10.776 "trsvcid": "4420" 00:12:10.776 } 00:12:10.776 ], 00:12:10.776 "allow_any_host": true, 00:12:10.776 "hosts": [] 00:12:10.776 }, 00:12:10.776 { 00:12:10.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.776 "subtype": "NVMe", 00:12:10.776 "listen_addresses": [ 00:12:10.776 { 00:12:10.776 "trtype": "RDMA", 00:12:10.776 "adrfam": "IPv4", 00:12:10.776 "traddr": "192.168.100.8", 00:12:10.776 "trsvcid": "4420" 00:12:10.776 } 00:12:10.776 ], 00:12:10.776 "allow_any_host": true, 00:12:10.776 "hosts": [], 00:12:10.776 "serial_number": "SPDK00000000000001", 00:12:10.776 "model_number": "SPDK bdev Controller", 00:12:10.776 "max_namespaces": 32, 00:12:10.776 "min_cntlid": 1, 00:12:10.776 "max_cntlid": 65519, 00:12:10.776 "namespaces": [ 00:12:10.776 { 00:12:10.776 "nsid": 1, 00:12:10.776 "bdev_name": "Null1", 00:12:10.776 "name": "Null1", 00:12:10.776 "nguid": "BA5DBD37E44745AD91385D50B1F08831", 00:12:10.776 "uuid": "ba5dbd37-e447-45ad-9138-5d50b1f08831" 00:12:10.776 } 00:12:10.776 ] 00:12:10.776 }, 00:12:10.776 { 00:12:10.776 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:10.776 "subtype": "NVMe", 00:12:10.776 "listen_addresses": [ 00:12:10.776 { 00:12:10.776 "trtype": "RDMA", 00:12:10.776 "adrfam": "IPv4", 00:12:10.776 "traddr": "192.168.100.8", 00:12:10.776 "trsvcid": "4420" 00:12:10.776 } 00:12:10.776 ], 00:12:10.776 "allow_any_host": true, 00:12:10.776 "hosts": [], 00:12:10.776 "serial_number": "SPDK00000000000002", 00:12:10.776 "model_number": "SPDK bdev Controller", 00:12:10.776 "max_namespaces": 32, 00:12:10.776 "min_cntlid": 1, 00:12:10.776 "max_cntlid": 65519, 00:12:10.776 "namespaces": [ 00:12:10.776 { 00:12:10.776 "nsid": 1, 00:12:10.776 "bdev_name": "Null2", 00:12:10.776 "name": "Null2", 00:12:10.776 "nguid": "AF47D1E9F4D24C2FAE5E4D4C6775982D", 00:12:10.776 "uuid": "af47d1e9-f4d2-4c2f-ae5e-4d4c6775982d" 00:12:10.776 } 00:12:10.776 ] 00:12:10.776 }, 00:12:10.776 { 00:12:10.776 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:10.776 "subtype": "NVMe", 00:12:10.776 "listen_addresses": [ 00:12:10.776 { 00:12:10.776 "trtype": "RDMA", 00:12:10.776 "adrfam": "IPv4", 00:12:10.776 "traddr": "192.168.100.8", 00:12:10.776 "trsvcid": "4420" 00:12:10.776 } 00:12:10.776 ], 00:12:10.776 "allow_any_host": true, 00:12:10.776 "hosts": [], 00:12:10.776 "serial_number": "SPDK00000000000003", 00:12:10.776 "model_number": "SPDK bdev Controller", 00:12:10.776 "max_namespaces": 32, 00:12:10.776 "min_cntlid": 1, 00:12:10.776 "max_cntlid": 65519, 00:12:10.776 "namespaces": [ 00:12:10.776 { 00:12:10.776 "nsid": 1, 00:12:10.776 "bdev_name": "Null3", 00:12:10.776 "name": "Null3", 00:12:10.776 "nguid": "3BC9F8228C214412B01C7390C54058A1", 00:12:10.776 "uuid": "3bc9f822-8c21-4412-b01c-7390c54058a1" 00:12:10.776 } 00:12:10.776 ] 00:12:10.776 }, 00:12:10.776 { 00:12:10.776 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:10.776 "subtype": "NVMe", 00:12:10.776 "listen_addresses": [ 00:12:10.776 { 00:12:10.776 
"trtype": "RDMA", 00:12:10.776 "adrfam": "IPv4", 00:12:10.776 "traddr": "192.168.100.8", 00:12:10.776 "trsvcid": "4420" 00:12:10.776 } 00:12:10.776 ], 00:12:10.776 "allow_any_host": true, 00:12:10.776 "hosts": [], 00:12:10.776 "serial_number": "SPDK00000000000004", 00:12:10.776 "model_number": "SPDK bdev Controller", 00:12:10.776 "max_namespaces": 32, 00:12:10.776 "min_cntlid": 1, 00:12:10.776 "max_cntlid": 65519, 00:12:10.776 "namespaces": [ 00:12:10.776 { 00:12:10.776 "nsid": 1, 00:12:10.776 "bdev_name": "Null4", 00:12:10.776 "name": "Null4", 00:12:10.776 "nguid": "D493F83DBBBF4B709098D91C5CF8F622", 00:12:10.776 "uuid": "d493f83d-bbbf-4b70-9098-d91c5cf8f622" 00:12:10.776 } 00:12:10.776 ] 00:12:10.776 } 00:12:10.776 ] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:10.776 
15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.777 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:10.777 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.777 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.777 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:10.777 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.036 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:11.036 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:11.037 15:30:48 
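Teardown mirrors setup: @42-@44 delete each subsystem before its backing bdev, @47 drops the referral, and @49-@50 assert that bdev_get_bdevs comes back empty (check_bdevs is tested with -n for exactly that reason). The same sequence directly:

    rpc=scripts/rpc.py
    for i in 1 2 3 4; do
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        $rpc bdev_null_delete Null$i
    done
    $rpc nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
    leftover=$($rpc bdev_get_bdevs | jq -r '.[].name')
    [[ -z $leftover ]] && echo "all bdevs cleaned up"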
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:11.037 rmmod nvme_rdma 00:12:11.037 rmmod nvme_fabrics 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2195870 ']' 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2195870 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 2195870 ']' 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 2195870 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2195870 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2195870' 00:12:11.037 killing process with pid 2195870 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 2195870 00:12:11.037 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 2195870 00:12:11.297 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.297 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:11.297 00:12:11.297 real 0m8.019s 00:12:11.297 user 0m6.325s 00:12:11.297 sys 0m5.378s 00:12:11.297 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:11.297 15:30:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:11.297 ************************************ 00:12:11.297 END TEST nvmf_target_discovery 
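nvmftestfini syncs and then unloads nvme-rdma inside a set +e retry loop (@124-@126, up to 20 attempts, since the module stays busy until queues drain); the rmmod lines show nvme_fabrics being pulled out with it. killprocess additionally checks that the PID still names reactor_0 before sending the kill. The unload loop on its own:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break   # nvme-fabrics follows as a dep
        sleep 1
    done
    set -e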
00:12:11.297 ************************************ 00:12:11.297 15:30:48 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:11.297 15:30:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:11.297 15:30:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:11.297 15:30:48 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.297 ************************************ 00:12:11.297 START TEST nvmf_referrals 00:12:11.297 ************************************ 00:12:11.297 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:11.558 * Looking for test storage... 00:12:11.558 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:11.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.558 --rc genhtml_branch_coverage=1 00:12:11.558 --rc genhtml_function_coverage=1 00:12:11.558 --rc genhtml_legend=1 00:12:11.558 --rc geninfo_all_blocks=1 00:12:11.558 --rc geninfo_unexecuted_blocks=1 00:12:11.558 00:12:11.558 ' 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:11.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.558 --rc genhtml_branch_coverage=1 00:12:11.558 --rc genhtml_function_coverage=1 00:12:11.558 --rc genhtml_legend=1 00:12:11.558 --rc geninfo_all_blocks=1 00:12:11.558 --rc geninfo_unexecuted_blocks=1 00:12:11.558 00:12:11.558 ' 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:11.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.558 --rc genhtml_branch_coverage=1 00:12:11.558 --rc genhtml_function_coverage=1 00:12:11.558 --rc genhtml_legend=1 00:12:11.558 --rc geninfo_all_blocks=1 00:12:11.558 --rc geninfo_unexecuted_blocks=1 00:12:11.558 00:12:11.558 ' 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:11.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.558 --rc genhtml_branch_coverage=1 00:12:11.558 --rc genhtml_function_coverage=1 00:12:11.558 --rc genhtml_legend=1 00:12:11.558 --rc geninfo_all_blocks=1 00:12:11.558 --rc geninfo_unexecuted_blocks=1 00:12:11.558 00:12:11.558 ' 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
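The referrals test opens by checking lcov's version with scripts/common.sh's pure-bash comparator: lt 1.15 2 splits both strings on ., -, or : into arrays and walks them index by index numerically, which is what the IFS=.-: and per-index ver1[v]/ver2[v] steps above are doing. Reduced to its core (a trimmed re-implementation, not the full cmp_versions with its gt/ge wrappers):

    cmp_lt() {                       # returns 0 when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<<"$1"
        IFS=.-: read -ra ver2 <<<"$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            # missing fields default to 0; numeric-only, so "1.15" < "2"
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                     # equal
    }
    cmp_lt 1.15 2 && echo "lcov predates 2.x"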
nvmf/common.sh@7 -- # uname -s 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.558 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
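The host identity pair used throughout these tests comes from nvme gen-hostnqn, which emits an nqn.2014-08.org.nvmexpress:uuid:<uuid> string; the bare host ID is just the trailing uuid. The parameter expansion below is an assumed equivalent, not necessarily the exact line common.sh uses:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # everything after the last colon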
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.559 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:11.559 15:30:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:18.136 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:18.136 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:18.136 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:18.136 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:18.136 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.137 15:30:55 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:18.137 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.137 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:18.137 altname enp217s0f0np0 00:12:18.137 altname ens818f0np0 00:12:18.137 inet 192.168.100.8/24 scope global mlx_0_0 00:12:18.137 valid_lft forever preferred_lft forever 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:18.137 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.137 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:18.137 altname enp217s0f1np1 00:12:18.137 altname ens818f1np1 00:12:18.137 inet 192.168.100.9/24 scope global mlx_0_1 00:12:18.137 valid_lft forever preferred_lft forever 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:18.137 15:30:55 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:18.137 192.168.100.9' 
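(Aside: the per-interface address lookup traced above — interface=mlx_0_0, then ip/awk/cut — reduces to a three-stage pipeline. A minimal standalone sketch of the harness's get_ip_address helper, reconstructed from the trace; the interface names are whatever RDMA netdevs were discovered on this rig:)

get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per IPv4 address; field 4 is ADDR/PREFIX
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig
get_ip_address mlx_0_1   # prints 192.168.100.9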
00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:18.137 192.168.100.9' 00:12:18.137 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:12:18.406 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:18.406 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:18.406 192.168.100.9' 00:12:18.406 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:12:18.406 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:12:18.406 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:18.406 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:18.406 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:18.406 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2199483 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2199483 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 2199483 ']' 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:18.407 15:30:55 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.407 [2024-11-03 15:30:56.021122] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
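(For context, nvmfappstart above boils down to launching the target binary and blocking until its RPC socket answers. A simplified sketch of that sequence — the harness's waitforlisten is approximated here by polling rpc_get_methods, and $SPDK_DIR stands in for the Jenkins workspace path:)

# Start the NVMe-oF target: shm id 0, all tracepoint groups, cores 0-3
"$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default UNIX-domain RPC socket until the app is ready
until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done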
00:12:18.407 [2024-11-03 15:30:56.021175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.407 [2024-11-03 15:30:56.098730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.407 [2024-11-03 15:30:56.121136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.407 [2024-11-03 15:30:56.121177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.407 [2024-11-03 15:30:56.121187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.407 [2024-11-03 15:30:56.121195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.407 [2024-11-03 15:30:56.121202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.407 [2024-11-03 15:30:56.122764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.407 [2024-11-03 15:30:56.122862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.407 [2024-11-03 15:30:56.122963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.407 [2024-11-03 15:30:56.122965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.671 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.672 [2024-11-03 15:30:56.283815] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe06c50/0xe0b100) succeed. 00:12:18.672 [2024-11-03 15:30:56.292767] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe08290/0xe4c7a0) succeed. 
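(The referral RPCs exercised next can be replayed directly against a running target; in the harness, rpc_cmd resolves to scripts/rpc.py on the default socket. A minimal sketch using the same parameters as the trace, with $SPDK_DIR as a placeholder:)

# Create the RDMA transport and expose the discovery service on the first target IP
"$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
# Register the three referral entries, then count them
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$SPDK_DIR"/scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
done
"$SPDK_DIR"/scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 3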
00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.672 [2024-11-03 15:30:56.432395] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.672 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.932 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:18.933 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.192 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.193 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:19.193 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.193 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:19.452 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:19.453 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:19.711 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:19.973 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:19.973 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:19.973 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:19.973 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.974 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
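(The host-side verification used throughout is a single nvme-cli discovery against the 8009 listener, filtered with jq; a condensed sketch with the host NQN/ID generated for this run. The jq filter keeps only referral records, i.e. everything except the current discovery subsystem:)

nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e \
    -t rdma -a 192.168.100.8 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort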
00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:20.326 rmmod nvme_rdma 00:12:20.326 rmmod nvme_fabrics 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2199483 ']' 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2199483 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 2199483 ']' 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 2199483 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2199483 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2199483' 00:12:20.326 killing process with pid 2199483 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 2199483 00:12:20.326 15:30:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 2199483 00:12:20.585 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.585 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:20.585 00:12:20.585 real 0m9.110s 00:12:20.585 user 0m10.535s 00:12:20.585 sys 0m6.023s 00:12:20.585 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:20.585 15:30:58 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.585 ************************************ 00:12:20.585 END TEST nvmf_referrals 00:12:20.585 ************************************ 00:12:20.585 15:30:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:20.585 15:30:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:20.586 15:30:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:20.586 15:30:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.586 ************************************ 00:12:20.586 START TEST nvmf_connect_disconnect 00:12:20.586 ************************************ 00:12:20.586 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:20.586 * Looking for test storage... 00:12:20.586 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:20.586 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:20.586 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:20.586 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:20.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.846 --rc genhtml_branch_coverage=1 00:12:20.846 --rc genhtml_function_coverage=1 00:12:20.846 --rc genhtml_legend=1 00:12:20.846 --rc geninfo_all_blocks=1 00:12:20.846 --rc geninfo_unexecuted_blocks=1 00:12:20.846 00:12:20.846 ' 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:20.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.846 --rc genhtml_branch_coverage=1 00:12:20.846 --rc genhtml_function_coverage=1 00:12:20.846 --rc genhtml_legend=1 00:12:20.846 --rc geninfo_all_blocks=1 00:12:20.846 --rc geninfo_unexecuted_blocks=1 00:12:20.846 00:12:20.846 ' 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:20.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.846 --rc genhtml_branch_coverage=1 00:12:20.846 --rc genhtml_function_coverage=1 00:12:20.846 --rc genhtml_legend=1 00:12:20.846 --rc geninfo_all_blocks=1 00:12:20.846 --rc geninfo_unexecuted_blocks=1 00:12:20.846 00:12:20.846 ' 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:20.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.846 --rc genhtml_branch_coverage=1 00:12:20.846 --rc genhtml_function_coverage=1 00:12:20.846 --rc genhtml_legend=1 00:12:20.846 --rc geninfo_all_blocks=1 00:12:20.846 --rc geninfo_unexecuted_blocks=1 00:12:20.846 00:12:20.846 ' 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.846 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.847 15:30:58 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.847 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.847 15:30:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:12:27.408 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:27.408 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:27.408 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:27.409 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
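The scan above matches each PCI function against a vendor:device table (0x15b3 is the Mellanox vendor ID; 0x1015 is the ConnectX-4 Lx part found at 0000:d9:00.0 and 0000:d9:00.1). Outside the harness's pci_bus_cache map, a minimal sketch of the same discovery using plain lspci (an assumption for illustration; the test itself never shells out to lspci) would be:

  # Print the PCI address and vendor:device pair of every Mellanox function,
  # in the same "Found ..." format the harness echoes above.
  lspci -Dn | awk '$3 ~ /^15b3:/ {print "Found " $1 " (0x15b3 - 0x" substr($3,6) ")"}'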
00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:27.409 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:27.409 15:31:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:27.409 15:31:05 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:27.409 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.409 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:27.409 altname enp217s0f0np0 00:12:27.409 altname ens818f0np0 00:12:27.409 inet 192.168.100.8/24 scope global mlx_0_0 00:12:27.409 valid_lft forever preferred_lft forever 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:27.409 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.409 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:27.409 altname enp217s0f1np1 00:12:27.409 altname ens818f1np1 00:12:27.409 inet 192.168.100.9/24 scope global mlx_0_1 00:12:27.409 valid_lft forever preferred_lft forever 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.409 15:31:05 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:27.409 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:27.410 192.168.100.9' 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:27.410 192.168.100.9' 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:27.410 192.168.100.9' 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:27.410 15:31:05 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2203273 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2203273 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 2203273 ']' 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:27.410 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.668 [2024-11-03 15:31:05.236447] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:12:27.668 [2024-11-03 15:31:05.236500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.668 [2024-11-03 15:31:05.314884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.668 [2024-11-03 15:31:05.337546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.668 [2024-11-03 15:31:05.337587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.668 [2024-11-03 15:31:05.337596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.668 [2024-11-03 15:31:05.337605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.668 [2024-11-03 15:31:05.337612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
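nvmfappstart above backgrounds the target binary and waits on its RPC socket before the test proceeds. A minimal sketch of that launch-and-wait pattern, with a bare socket poll standing in for the harness's more careful waitforlisten:

  # Start nvmf_tgt with the same flags logged above, then block until the
  # UNIX-domain RPC socket exists before issuing any RPCs.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
      sleep 0.1
  done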
00:12:27.668 [2024-11-03 15:31:05.339364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.668 [2024-11-03 15:31:05.339458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.668 [2024-11-03 15:31:05.339550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.668 [2024-11-03 15:31:05.339551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.668 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.668 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:12:27.668 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.668 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.668 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.926 [2024-11-03 15:31:05.486712] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:27.926 [2024-11-03 15:31:05.508114] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1259c50/0x125e100) succeed. 00:12:27.926 [2024-11-03 15:31:05.517389] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x125b290/0x129f7a0) succeed. 
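With both IB devices created, the test provisions the target through rpc_cmd: the transport creation just above, then the bdev/subsystem/listener setup just below. Issued standalone through scripts/rpc.py against the same socket, that sequence comes out roughly as:

  # Same RPC calls the connect_disconnect test issues via rpc_cmd.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  $rpc bdev_malloc_create 64 512        # 64 MiB Malloc0 bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Each of the 100 iterations that follow then runs, roughly, nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 followed by nvme disconnect -n nqn.2016-06.io.spdk:cnode1, which is what prints the repeated "disconnected 1 controller(s)" lines.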
00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.926 [2024-11-03 15:31:05.667780] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:27.926 15:31:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:31.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.823 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:12:50.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.684 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:43.535 15:36:20 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:43.535 rmmod nvme_rdma 00:17:43.535 rmmod nvme_fabrics 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2203273 ']' 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2203273 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2203273 ']' 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 2203273 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2203273 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2203273' 00:17:43.535 killing process with pid 2203273 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 2203273 00:17:43.535 15:36:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 2203273 00:17:43.535 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:43.535 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:43.535 00:17:43.535 real 5m22.972s 00:17:43.535 user 21m0.436s 00:17:43.535 sys 0m17.905s 00:17:43.535 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:43.535 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:43.535 
************************************ 00:17:43.535 END TEST nvmf_connect_disconnect 00:17:43.535 ************************************ 00:17:43.535 15:36:21 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:17:43.535 15:36:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:43.535 15:36:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:43.535 15:36:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.535 ************************************ 00:17:43.535 START TEST nvmf_multitarget 00:17:43.535 ************************************ 00:17:43.535 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:17:43.794 * Looking for test storage... 00:17:43.794 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.794 --rc genhtml_branch_coverage=1 00:17:43.794 --rc genhtml_function_coverage=1 00:17:43.794 --rc genhtml_legend=1 00:17:43.794 --rc geninfo_all_blocks=1 00:17:43.794 --rc geninfo_unexecuted_blocks=1 00:17:43.794 00:17:43.794 ' 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.794 --rc genhtml_branch_coverage=1 00:17:43.794 --rc genhtml_function_coverage=1 00:17:43.794 --rc genhtml_legend=1 00:17:43.794 --rc geninfo_all_blocks=1 00:17:43.794 --rc geninfo_unexecuted_blocks=1 00:17:43.794 00:17:43.794 ' 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.794 --rc genhtml_branch_coverage=1 00:17:43.794 --rc genhtml_function_coverage=1 00:17:43.794 --rc genhtml_legend=1 00:17:43.794 --rc geninfo_all_blocks=1 00:17:43.794 --rc geninfo_unexecuted_blocks=1 00:17:43.794 00:17:43.794 ' 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.794 --rc genhtml_branch_coverage=1 00:17:43.794 --rc genhtml_function_coverage=1 00:17:43.794 --rc genhtml_legend=1 00:17:43.794 --rc geninfo_all_blocks=1 00:17:43.794 --rc geninfo_unexecuted_blocks=1 00:17:43.794 00:17:43.794 ' 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.794 15:36:21 
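The "lt 1.15 2" probe above decides whether the installed lcov predates version 2, which selects the legacy "--rc lcov_*" option spellings seen in LCOV_OPTS. cmp_versions splits both versions on ".", "-", and ":" and compares components numerically, left to right, padding the shorter one with zeros. A condensed, self-contained sketch of the same comparison (a simplification, not a verbatim copy of scripts/common.sh):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not strictly "less than"
    }
    lt 1.15 2 && echo "lcov older than 2: keep the legacy --rc lcov_* names"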
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.794 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.795 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:43.795 15:36:21 
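Each nested source of paths/export.sh above prepends the Go, protoc, and golangci directories again, which is why the echoed PATH carries the same entries many times over. That is harmless for lookup (the first match wins), but a hypothetical dedup pass, shown here only as a sketch and not part of paths/export.sh, would keep the echoed value readable:

    # Hypothetical helper: drop duplicate PATH entries, preserving
    # first-seen order.
    dedup_path() {
        local IFS=: entry out=
        for entry in $PATH; do
            case ":$out:" in
                *":$entry:"*) ;;                 # already kept, skip
                *) out=${out:+$out:}$entry ;;
            esac
        done
        PATH=$out
    }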
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.795 15:36:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:50.359 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:50.359 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:50.359 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:50.359 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:50.359 15:36:27 
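gather_supported_nvmf_pci_devs above assembles the candidate NIC list from PCI vendor:device IDs before walking sysfs for the matching net devices. For reference, the IDs probed in this run, all visible in the trace:

    # RDMA-capable NICs the harness looks for (vendor 0x8086 = Intel,
    # 0x15b3 = Mellanox), as collected into e810/x722/mlx above:
    #   e810: 0x1592 0x159b
    #   x722: 0x37d2
    #   mlx:  0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013
    # This host matched two ConnectX ports (0x15b3 - 0x1015) at
    # 0000:d9:00.0 and 0000:d9:00.1, exposed as mlx_0_0 and mlx_0_1.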
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:50.359 15:36:27 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:50.359 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:50.360 15:36:28 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:50.360 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:50.360 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:50.360 altname enp217s0f0np0 00:17:50.360 altname ens818f0np0 00:17:50.360 inet 192.168.100.8/24 scope global mlx_0_0 00:17:50.360 valid_lft forever preferred_lft forever 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:50.360 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:50.360 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:50.360 altname enp217s0f1np1 00:17:50.360 altname ens818f1np1 00:17:50.360 inet 192.168.100.9/24 scope global mlx_0_1 00:17:50.360 valid_lft forever preferred_lft forever 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:50.360 15:36:28 
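allocate_nic_ips above reads each RDMA interface's IPv4 address straight out of "ip -o -4 addr show", keeping field 4 and stripping the prefix length; on this rig that yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1. The same extraction as a standalone snippet, mirroring the common.sh lines 116-117 traced here:

    # Print the first IPv4 address on an interface, without the /mask suffix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this host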
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:50.360 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:50.619 192.168.100.9' 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:50.619 192.168.100.9' 00:17:50.619 15:36:28 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:50.619 192.168.100.9' 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:50.619 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2262784 00:17:50.620 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:50.620 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2262784 00:17:50.620 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 2262784 ']' 00:17:50.620 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.620 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.620 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.620 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.620 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:50.620 [2024-11-03 15:36:28.288605] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
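After peeling the first and second target IPs off RDMA_IP_LIST with head/tail, nvmfappstart above launches the target, records its pid (2262784 here), and waitforlisten blocks until the RPC socket answers before the test proceeds. A rough sketch of that start-and-poll shape, assuming rpc.py's rpc_get_methods as the liveness probe (autotest_common.sh's waitforlisten is more careful about timeouts):

    # Start nvmf_tgt pinned to cores 0-3 and wait for its RPC socket.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target already died
        sleep 0.5
    done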
00:17:50.620 [2024-11-03 15:36:28.288658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.620 [2024-11-03 15:36:28.366640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:50.620 [2024-11-03 15:36:28.389285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.620 [2024-11-03 15:36:28.389323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.620 [2024-11-03 15:36:28.389336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.620 [2024-11-03 15:36:28.389345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.620 [2024-11-03 15:36:28.389351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.620 [2024-11-03 15:36:28.391096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.620 [2024-11-03 15:36:28.391196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.620 [2024-11-03 15:36:28.391285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.620 [2024-11-03 15:36:28.391287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:50.878 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:51.136 "nvmf_tgt_1" 00:17:51.136 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:51.136 "nvmf_tgt_2" 00:17:51.136 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:51.136 
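multitarget.sh drives the whole check over the RPC socket: the target count starts at 1 (the default target), two named targets are created ("-s 32" appears to cap subsystems per target), the count is re-read, and the trace just below deletes both and confirms the count falls back to 1. The full round-trip, condensed from the commands in this trace:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]      # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]      # default + two new
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]      # back to the default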
15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:51.395 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:51.395 15:36:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:51.395 true 00:17:51.395 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:51.395 true 00:17:51.395 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:51.654 rmmod nvme_rdma 00:17:51.654 rmmod nvme_fabrics 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2262784 ']' 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2262784 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 2262784 ']' 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 2262784 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2262784 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2262784' 00:17:51.654 killing process with pid 2262784 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 2262784 00:17:51.654 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 2262784 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:52.004 00:17:52.004 real 0m8.311s 00:17:52.004 user 0m7.312s 00:17:52.004 sys 0m5.658s 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:52.004 ************************************ 00:17:52.004 END TEST nvmf_multitarget 00:17:52.004 ************************************ 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.004 ************************************ 00:17:52.004 START TEST nvmf_rpc 00:17:52.004 ************************************ 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:52.004 * Looking for test storage... 
00:17:52.004 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:17:52.004 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:52.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.334 --rc genhtml_branch_coverage=1 00:17:52.334 --rc genhtml_function_coverage=1 00:17:52.334 --rc genhtml_legend=1 00:17:52.334 --rc geninfo_all_blocks=1 00:17:52.334 --rc geninfo_unexecuted_blocks=1 00:17:52.334 00:17:52.334 ' 00:17:52.334 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:52.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.334 --rc genhtml_branch_coverage=1 00:17:52.334 --rc genhtml_function_coverage=1 00:17:52.335 --rc genhtml_legend=1 00:17:52.335 --rc geninfo_all_blocks=1 00:17:52.335 --rc geninfo_unexecuted_blocks=1 00:17:52.335 00:17:52.335 ' 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:52.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.335 --rc genhtml_branch_coverage=1 00:17:52.335 --rc genhtml_function_coverage=1 00:17:52.335 --rc genhtml_legend=1 00:17:52.335 --rc geninfo_all_blocks=1 00:17:52.335 --rc geninfo_unexecuted_blocks=1 00:17:52.335 00:17:52.335 ' 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:52.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.335 --rc genhtml_branch_coverage=1 00:17:52.335 --rc genhtml_function_coverage=1 00:17:52.335 --rc genhtml_legend=1 00:17:52.335 --rc geninfo_all_blocks=1 00:17:52.335 --rc geninfo_unexecuted_blocks=1 00:17:52.335 00:17:52.335 ' 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.335 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:52.335 15:36:29 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:52.335 15:36:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.455 15:36:36 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:00.455 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:00.455 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:00.455 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:00.455 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:00.455 15:36:36 
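rdma_device_init above loads the InfiniBand/RDMA kernel module stack before any listener is created. The same sequence, reduced to a loop (run as root; module names exactly as traced):

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || echo "failed to load $mod" >&2
done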
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:00.455 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:00.456 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:00.456 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:00.456 altname enp217s0f0np0 00:18:00.456 altname ens818f0np0 00:18:00.456 inet 192.168.100.8/24 scope global mlx_0_0 00:18:00.456 valid_lft forever preferred_lft forever 00:18:00.456 
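get_ip_address, traced above, pulls the interface's IPv4 out of the one-record-per-line `ip -o -4` format: field 4 is "ADDR/PREFIX" and cut drops the prefix length. Reassembled from the traced commands:

get_ip_address() {
    local interface=$1
    # `ip -o` prints one record per line; $4 is e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this testbed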
15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:00.456 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:00.456 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:00.456 altname enp217s0f1np1 00:18:00.456 altname ens818f1np1 00:18:00.456 inet 192.168.100.9/24 scope global mlx_0_1 00:18:00.456 valid_lft forever preferred_lft forever 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:00.456 15:36:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:00.456 192.168.100.9' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:00.456 192.168.100.9' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:00.456 192.168.100.9' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
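RDMA_IP_LIST holds one address per line, so the first and second target IPs fall out with head/tail exactly as traced above:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9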
00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2266374 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2266374 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 2266374 ']' 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.456 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.456 [2024-11-03 15:36:37.146457] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:18:00.456 [2024-11-03 15:36:37.146512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.456 [2024-11-03 15:36:37.225409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.456 [2024-11-03 15:36:37.248188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.456 [2024-11-03 15:36:37.248227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.456 [2024-11-03 15:36:37.248236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.457 [2024-11-03 15:36:37.248245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.457 [2024-11-03 15:36:37.248252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
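nvmfappstart above launches the target with shm ID 0, the full tracepoint mask, and a 4-core mask, then blocks until the RPC socket answers. A condensed sketch of that start-and-wait, polling with a plain rpc.py call in place of the harness's waitforlisten helper (paths assume a stock SPDK checkout):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Wait until the app listens on /var/tmp/spdk.sock before issuing RPCs:
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done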
00:18:00.457 [2024-11-03 15:36:37.250026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.457 [2024-11-03 15:36:37.250046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.457 [2024-11-03 15:36:37.250129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.457 [2024-11-03 15:36:37.250131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:00.457 "tick_rate": 2500000000, 00:18:00.457 "poll_groups": [ 00:18:00.457 { 00:18:00.457 "name": "nvmf_tgt_poll_group_000", 00:18:00.457 "admin_qpairs": 0, 00:18:00.457 "io_qpairs": 0, 00:18:00.457 "current_admin_qpairs": 0, 00:18:00.457 "current_io_qpairs": 0, 00:18:00.457 "pending_bdev_io": 0, 00:18:00.457 "completed_nvme_io": 0, 00:18:00.457 "transports": [] 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "nvmf_tgt_poll_group_001", 00:18:00.457 "admin_qpairs": 0, 00:18:00.457 "io_qpairs": 0, 00:18:00.457 "current_admin_qpairs": 0, 00:18:00.457 "current_io_qpairs": 0, 00:18:00.457 "pending_bdev_io": 0, 00:18:00.457 "completed_nvme_io": 0, 00:18:00.457 "transports": [] 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "nvmf_tgt_poll_group_002", 00:18:00.457 "admin_qpairs": 0, 00:18:00.457 "io_qpairs": 0, 00:18:00.457 "current_admin_qpairs": 0, 00:18:00.457 "current_io_qpairs": 0, 00:18:00.457 "pending_bdev_io": 0, 00:18:00.457 "completed_nvme_io": 0, 00:18:00.457 "transports": [] 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "nvmf_tgt_poll_group_003", 00:18:00.457 "admin_qpairs": 0, 00:18:00.457 "io_qpairs": 0, 00:18:00.457 "current_admin_qpairs": 0, 00:18:00.457 "current_io_qpairs": 0, 00:18:00.457 "pending_bdev_io": 0, 00:18:00.457 "completed_nvme_io": 0, 00:18:00.457 "transports": [] 00:18:00.457 } 00:18:00.457 ] 00:18:00.457 }' 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.457 [2024-11-03 15:36:37.536760] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xba0cb0/0xba5160) succeed. 00:18:00.457 [2024-11-03 15:36:37.545885] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xba22f0/0xbe6800) succeed. 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.457 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:00.457 "tick_rate": 2500000000, 00:18:00.457 "poll_groups": [ 00:18:00.457 { 00:18:00.457 "name": "nvmf_tgt_poll_group_000", 00:18:00.457 "admin_qpairs": 0, 00:18:00.457 "io_qpairs": 0, 00:18:00.457 "current_admin_qpairs": 0, 00:18:00.457 "current_io_qpairs": 0, 00:18:00.457 "pending_bdev_io": 0, 00:18:00.457 "completed_nvme_io": 0, 00:18:00.457 "transports": [ 00:18:00.457 { 00:18:00.457 "trtype": "RDMA", 00:18:00.457 "pending_data_buffer": 0, 00:18:00.457 "devices": [ 00:18:00.457 { 00:18:00.457 "name": "mlx5_0", 00:18:00.457 "polls": 15885, 00:18:00.457 "idle_polls": 15885, 00:18:00.457 "completions": 0, 00:18:00.457 "requests": 0, 00:18:00.457 "request_latency": 0, 00:18:00.457 "pending_free_request": 0, 00:18:00.457 "pending_rdma_read": 0, 00:18:00.457 "pending_rdma_write": 0, 00:18:00.457 "pending_rdma_send": 0, 00:18:00.457 "total_send_wrs": 0, 00:18:00.457 "send_doorbell_updates": 0, 00:18:00.457 "total_recv_wrs": 4096, 00:18:00.457 "recv_doorbell_updates": 1 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "mlx5_1", 00:18:00.457 "polls": 15885, 00:18:00.457 "idle_polls": 15885, 00:18:00.457 "completions": 0, 00:18:00.457 "requests": 0, 00:18:00.457 "request_latency": 0, 00:18:00.457 "pending_free_request": 0, 00:18:00.457 "pending_rdma_read": 0, 00:18:00.457 "pending_rdma_write": 0, 00:18:00.457 "pending_rdma_send": 0, 00:18:00.457 "total_send_wrs": 0, 00:18:00.457 "send_doorbell_updates": 0, 00:18:00.457 "total_recv_wrs": 4096, 00:18:00.457 "recv_doorbell_updates": 1 00:18:00.457 } 00:18:00.457 ] 00:18:00.457 } 00:18:00.457 ] 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "nvmf_tgt_poll_group_001", 00:18:00.457 "admin_qpairs": 0, 00:18:00.457 "io_qpairs": 0, 00:18:00.457 "current_admin_qpairs": 0, 00:18:00.457 "current_io_qpairs": 0, 00:18:00.457 "pending_bdev_io": 0, 00:18:00.457 "completed_nvme_io": 0, 00:18:00.457 "transports": [ 00:18:00.457 { 00:18:00.457 "trtype": "RDMA", 00:18:00.457 "pending_data_buffer": 0, 00:18:00.457 "devices": [ 00:18:00.457 { 00:18:00.457 "name": "mlx5_0", 
00:18:00.457 "polls": 9846, 00:18:00.457 "idle_polls": 9846, 00:18:00.457 "completions": 0, 00:18:00.457 "requests": 0, 00:18:00.457 "request_latency": 0, 00:18:00.457 "pending_free_request": 0, 00:18:00.457 "pending_rdma_read": 0, 00:18:00.457 "pending_rdma_write": 0, 00:18:00.457 "pending_rdma_send": 0, 00:18:00.457 "total_send_wrs": 0, 00:18:00.457 "send_doorbell_updates": 0, 00:18:00.457 "total_recv_wrs": 4096, 00:18:00.457 "recv_doorbell_updates": 1 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "mlx5_1", 00:18:00.457 "polls": 9846, 00:18:00.457 "idle_polls": 9846, 00:18:00.457 "completions": 0, 00:18:00.457 "requests": 0, 00:18:00.457 "request_latency": 0, 00:18:00.457 "pending_free_request": 0, 00:18:00.457 "pending_rdma_read": 0, 00:18:00.457 "pending_rdma_write": 0, 00:18:00.457 "pending_rdma_send": 0, 00:18:00.457 "total_send_wrs": 0, 00:18:00.457 "send_doorbell_updates": 0, 00:18:00.457 "total_recv_wrs": 4096, 00:18:00.457 "recv_doorbell_updates": 1 00:18:00.457 } 00:18:00.457 ] 00:18:00.457 } 00:18:00.457 ] 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "nvmf_tgt_poll_group_002", 00:18:00.457 "admin_qpairs": 0, 00:18:00.457 "io_qpairs": 0, 00:18:00.457 "current_admin_qpairs": 0, 00:18:00.457 "current_io_qpairs": 0, 00:18:00.457 "pending_bdev_io": 0, 00:18:00.457 "completed_nvme_io": 0, 00:18:00.457 "transports": [ 00:18:00.457 { 00:18:00.457 "trtype": "RDMA", 00:18:00.457 "pending_data_buffer": 0, 00:18:00.457 "devices": [ 00:18:00.457 { 00:18:00.457 "name": "mlx5_0", 00:18:00.457 "polls": 5455, 00:18:00.457 "idle_polls": 5455, 00:18:00.457 "completions": 0, 00:18:00.457 "requests": 0, 00:18:00.457 "request_latency": 0, 00:18:00.457 "pending_free_request": 0, 00:18:00.457 "pending_rdma_read": 0, 00:18:00.457 "pending_rdma_write": 0, 00:18:00.457 "pending_rdma_send": 0, 00:18:00.457 "total_send_wrs": 0, 00:18:00.457 "send_doorbell_updates": 0, 00:18:00.457 "total_recv_wrs": 4096, 00:18:00.457 "recv_doorbell_updates": 1 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "mlx5_1", 00:18:00.457 "polls": 5455, 00:18:00.457 "idle_polls": 5455, 00:18:00.457 "completions": 0, 00:18:00.457 "requests": 0, 00:18:00.457 "request_latency": 0, 00:18:00.457 "pending_free_request": 0, 00:18:00.457 "pending_rdma_read": 0, 00:18:00.457 "pending_rdma_write": 0, 00:18:00.457 "pending_rdma_send": 0, 00:18:00.457 "total_send_wrs": 0, 00:18:00.457 "send_doorbell_updates": 0, 00:18:00.457 "total_recv_wrs": 4096, 00:18:00.457 "recv_doorbell_updates": 1 00:18:00.457 } 00:18:00.457 ] 00:18:00.457 } 00:18:00.457 ] 00:18:00.457 }, 00:18:00.457 { 00:18:00.457 "name": "nvmf_tgt_poll_group_003", 00:18:00.457 "admin_qpairs": 0, 00:18:00.457 "io_qpairs": 0, 00:18:00.458 "current_admin_qpairs": 0, 00:18:00.458 "current_io_qpairs": 0, 00:18:00.458 "pending_bdev_io": 0, 00:18:00.458 "completed_nvme_io": 0, 00:18:00.458 "transports": [ 00:18:00.458 { 00:18:00.458 "trtype": "RDMA", 00:18:00.458 "pending_data_buffer": 0, 00:18:00.458 "devices": [ 00:18:00.458 { 00:18:00.458 "name": "mlx5_0", 00:18:00.458 "polls": 898, 00:18:00.458 "idle_polls": 898, 00:18:00.458 "completions": 0, 00:18:00.458 "requests": 0, 00:18:00.458 "request_latency": 0, 00:18:00.458 "pending_free_request": 0, 00:18:00.458 "pending_rdma_read": 0, 00:18:00.458 "pending_rdma_write": 0, 00:18:00.458 "pending_rdma_send": 0, 00:18:00.458 "total_send_wrs": 0, 00:18:00.458 "send_doorbell_updates": 0, 00:18:00.458 "total_recv_wrs": 4096, 00:18:00.458 "recv_doorbell_updates": 1 00:18:00.458 }, 00:18:00.458 { 00:18:00.458 "name": "mlx5_1", 
00:18:00.458 "polls": 898, 00:18:00.458 "idle_polls": 898, 00:18:00.458 "completions": 0, 00:18:00.458 "requests": 0, 00:18:00.458 "request_latency": 0, 00:18:00.458 "pending_free_request": 0, 00:18:00.458 "pending_rdma_read": 0, 00:18:00.458 "pending_rdma_write": 0, 00:18:00.458 "pending_rdma_send": 0, 00:18:00.458 "total_send_wrs": 0, 00:18:00.458 "send_doorbell_updates": 0, 00:18:00.458 "total_recv_wrs": 4096, 00:18:00.458 "recv_doorbell_updates": 1 00:18:00.458 } 00:18:00.458 ] 00:18:00.458 } 00:18:00.458 ] 00:18:00.458 } 00:18:00.458 ] 00:18:00.458 }' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:00.458 15:36:37 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.458 Malloc1 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.458 15:36:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.458 [2024-11-03 15:36:38.004731] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:00.458 15:36:38 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:00.458 [2024-11-03 15:36:38.057038] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:18:00.458 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:00.458 could not add new controller: failed to write to nvme-fabrics device 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.458 15:36:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:01.395 15:36:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:01.395 15:36:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:01.395 15:36:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.395 15:36:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:01.395 15:36:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:03.928 15:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:03.928 15:36:41 
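The sequence above is the allow-list check: with allow_any_host disabled, a connect from an unregistered host NQN is rejected ("does not allow host"), and only after nvmf_subsystem_add_host does the identical connect succeed. The same calls issued through rpc.py directly, with the NQNs from this log:

./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
# nvme connect from an unlisted hostnqn now fails with an I/O error
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
# the same nvme connect succeeds once the host is whitelisted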
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:03.928 15:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:03.928 15:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:03.928 15:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.928 15:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:03.928 15:36:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:04.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
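waitforserial and waitforserial_disconnect, both traced above, poll lsblk for the subsystem serial until the namespace appears or vanishes. A rough reconstruction of the disconnect variant from the traced commands; the retry bound is an assumption borrowed from waitforserial's visible `i++ <= 15` counter:

waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1   # assumed bound: give up after ~15 tries
        sleep 1
    done
}
waitforserial_disconnect SPDKISFASTANDAWESOME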
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:04.495 [2024-11-03 15:36:42.148826] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:18:04.495 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:04.495 could not add new controller: failed to write to nvme-fabrics device 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.495 15:36:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:05.429 15:36:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:05.429 15:36:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:05.429 15:36:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.429 15:36:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:05.429 15:36:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:07.956 15:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:07.956 15:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:07.956 15:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:07.956 15:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:07.956 15:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.956 15:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:07.956 15:36:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
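Both rejected connects run under the harness's NOT wrapper, which inverts the exit status so an expected failure keeps the test green. The idea in miniature (simplified; the real helper also validates that the argument is an executable, as the type -t/type -P trace shows):

NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what the test wanted
}
NOT nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 \
    -a 192.168.100.8 -s 4420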
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.523 [2024-11-03 15:36:46.256637] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.523 15:36:46 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.523 15:36:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:09.897 15:36:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:09.897 15:36:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:09.897 15:36:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.897 15:36:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:09.897 15:36:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:11.796 15:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:11.796 15:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:11.796 15:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:11.796 15:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:11.796 15:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.796 15:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:11.796 15:36:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:12.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
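From here the test repeats the same create/connect/verify/teardown cycle five times ($loops). One iteration, collapsed into the underlying RPC and nvme-cli calls as traced:

for i in $(seq 1 5); do
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    # ... wait for the namespace to show up, then tear everything down ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done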
xtrace_disable 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.729 [2024-11-03 15:36:50.327581] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.729 15:36:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:13.662 15:36:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:13.662 15:36:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:13.662 15:36:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:13.662 15:36:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:13.662 15:36:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:15.559 15:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:15.559 15:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:15.559 
15:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:15.559 15:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:15.559 15:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.559 15:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:15.559 15:36:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.933 [2024-11-03 15:36:54.372278] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.933 15:36:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:17.866 15:36:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:17.866 15:36:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:17.866 15:36:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.866 15:36:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:17.866 15:36:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:19.766 15:36:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:19.766 15:36:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:19.766 15:36:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:19.766 15:36:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:19.766 15:36:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.766 15:36:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:19.766 15:36:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:20.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:20.700 15:36:58 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 [2024-11-03 15:36:58.418318] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.700 15:36:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:22.074 15:36:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:22.074 15:36:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:22.074 15:36:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.074 15:36:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:22.074 15:36:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:23.980 15:37:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:23.980 15:37:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:23.980 15:37:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.980 15:37:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:23.980 15:37:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.980 15:37:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:23.980 15:37:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:24.914 15:37:02 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.914 [2024-11-03 15:37:02.488271] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.914 15:37:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:25.847 15:37:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:25.847 15:37:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:25.847 15:37:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.847 15:37:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:25.847 15:37:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:27.746 15:37:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:27.746 15:37:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:27.746 15:37:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.746 15:37:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:27.746 15:37:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == 
nvme_device_counter )) 00:18:27.746 15:37:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:27.746 15:37:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:28.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:28.679 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:28.679 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:28.679 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:28.679 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.679 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.679 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 [2024-11-03 15:37:06.526153] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 [2024-11-03 15:37:06.574351] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 [2024-11-03 15:37:06.622522] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.939 [2024-11-03 15:37:06.670710] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.939 [2024-11-03 15:37:06.718884] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.939 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.197 15:37:06 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.197 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:29.197 "tick_rate": 2500000000, 00:18:29.197 "poll_groups": [ 00:18:29.197 { 00:18:29.197 "name": "nvmf_tgt_poll_group_000", 00:18:29.197 "admin_qpairs": 2, 00:18:29.197 "io_qpairs": 27, 00:18:29.197 "current_admin_qpairs": 0, 00:18:29.197 "current_io_qpairs": 0, 00:18:29.197 "pending_bdev_io": 0, 00:18:29.197 "completed_nvme_io": 126, 00:18:29.197 "transports": [ 00:18:29.197 { 00:18:29.197 "trtype": "RDMA", 00:18:29.198 "pending_data_buffer": 0, 00:18:29.198 "devices": [ 00:18:29.198 { 00:18:29.198 "name": "mlx5_0", 00:18:29.198 "polls": 3619663, 00:18:29.198 "idle_polls": 3619345, 00:18:29.198 "completions": 359, 00:18:29.198 "requests": 179, 00:18:29.198 "request_latency": 37440650, 00:18:29.198 "pending_free_request": 0, 00:18:29.198 "pending_rdma_read": 0, 00:18:29.198 "pending_rdma_write": 0, 00:18:29.198 "pending_rdma_send": 0, 00:18:29.198 "total_send_wrs": 303, 00:18:29.198 "send_doorbell_updates": 156, 00:18:29.198 "total_recv_wrs": 4275, 00:18:29.198 "recv_doorbell_updates": 156 00:18:29.198 }, 00:18:29.198 { 00:18:29.198 "name": "mlx5_1", 00:18:29.198 "polls": 3619663, 00:18:29.198 "idle_polls": 3619663, 00:18:29.198 "completions": 0, 00:18:29.198 "requests": 0, 00:18:29.198 "request_latency": 0, 00:18:29.198 "pending_free_request": 0, 00:18:29.198 "pending_rdma_read": 0, 00:18:29.198 "pending_rdma_write": 0, 00:18:29.198 "pending_rdma_send": 0, 00:18:29.198 "total_send_wrs": 0, 00:18:29.198 "send_doorbell_updates": 0, 00:18:29.198 "total_recv_wrs": 4096, 00:18:29.198 "recv_doorbell_updates": 1 00:18:29.198 } 00:18:29.198 ] 00:18:29.198 } 00:18:29.198 ] 00:18:29.198 }, 00:18:29.198 { 00:18:29.198 "name": "nvmf_tgt_poll_group_001", 00:18:29.198 "admin_qpairs": 2, 00:18:29.198 "io_qpairs": 26, 00:18:29.198 "current_admin_qpairs": 0, 00:18:29.198 "current_io_qpairs": 0, 00:18:29.198 "pending_bdev_io": 0, 00:18:29.198 "completed_nvme_io": 126, 00:18:29.198 "transports": [ 00:18:29.198 { 00:18:29.198 "trtype": "RDMA", 00:18:29.198 "pending_data_buffer": 0, 00:18:29.198 "devices": [ 00:18:29.198 { 00:18:29.198 "name": "mlx5_0", 00:18:29.198 "polls": 3536687, 00:18:29.198 "idle_polls": 3536366, 00:18:29.198 "completions": 360, 00:18:29.198 "requests": 180, 00:18:29.198 "request_latency": 35691434, 00:18:29.198 "pending_free_request": 0, 00:18:29.198 "pending_rdma_read": 0, 00:18:29.198 "pending_rdma_write": 0, 00:18:29.198 "pending_rdma_send": 0, 00:18:29.198 "total_send_wrs": 306, 00:18:29.198 "send_doorbell_updates": 158, 00:18:29.198 "total_recv_wrs": 4276, 00:18:29.198 "recv_doorbell_updates": 159 00:18:29.198 }, 00:18:29.198 { 00:18:29.198 "name": "mlx5_1", 00:18:29.198 "polls": 3536687, 00:18:29.198 "idle_polls": 3536687, 00:18:29.198 "completions": 0, 00:18:29.198 "requests": 0, 00:18:29.198 "request_latency": 0, 00:18:29.198 "pending_free_request": 0, 00:18:29.198 
"pending_rdma_read": 0, 00:18:29.198 "pending_rdma_write": 0, 00:18:29.198 "pending_rdma_send": 0, 00:18:29.198 "total_send_wrs": 0, 00:18:29.198 "send_doorbell_updates": 0, 00:18:29.198 "total_recv_wrs": 4096, 00:18:29.198 "recv_doorbell_updates": 1 00:18:29.198 } 00:18:29.198 ] 00:18:29.198 } 00:18:29.198 ] 00:18:29.198 }, 00:18:29.198 { 00:18:29.198 "name": "nvmf_tgt_poll_group_002", 00:18:29.198 "admin_qpairs": 1, 00:18:29.198 "io_qpairs": 26, 00:18:29.198 "current_admin_qpairs": 0, 00:18:29.198 "current_io_qpairs": 0, 00:18:29.198 "pending_bdev_io": 0, 00:18:29.198 "completed_nvme_io": 126, 00:18:29.198 "transports": [ 00:18:29.198 { 00:18:29.198 "trtype": "RDMA", 00:18:29.198 "pending_data_buffer": 0, 00:18:29.198 "devices": [ 00:18:29.198 { 00:18:29.198 "name": "mlx5_0", 00:18:29.198 "polls": 3603702, 00:18:29.198 "idle_polls": 3603435, 00:18:29.198 "completions": 307, 00:18:29.198 "requests": 153, 00:18:29.198 "request_latency": 33904384, 00:18:29.198 "pending_free_request": 0, 00:18:29.198 "pending_rdma_read": 0, 00:18:29.198 "pending_rdma_write": 0, 00:18:29.198 "pending_rdma_send": 0, 00:18:29.198 "total_send_wrs": 266, 00:18:29.198 "send_doorbell_updates": 130, 00:18:29.198 "total_recv_wrs": 4249, 00:18:29.198 "recv_doorbell_updates": 130 00:18:29.198 }, 00:18:29.198 { 00:18:29.198 "name": "mlx5_1", 00:18:29.198 "polls": 3603702, 00:18:29.198 "idle_polls": 3603702, 00:18:29.198 "completions": 0, 00:18:29.198 "requests": 0, 00:18:29.198 "request_latency": 0, 00:18:29.198 "pending_free_request": 0, 00:18:29.198 "pending_rdma_read": 0, 00:18:29.198 "pending_rdma_write": 0, 00:18:29.198 "pending_rdma_send": 0, 00:18:29.198 "total_send_wrs": 0, 00:18:29.198 "send_doorbell_updates": 0, 00:18:29.198 "total_recv_wrs": 4096, 00:18:29.198 "recv_doorbell_updates": 1 00:18:29.198 } 00:18:29.198 ] 00:18:29.198 } 00:18:29.198 ] 00:18:29.198 }, 00:18:29.198 { 00:18:29.198 "name": "nvmf_tgt_poll_group_003", 00:18:29.198 "admin_qpairs": 2, 00:18:29.198 "io_qpairs": 26, 00:18:29.198 "current_admin_qpairs": 0, 00:18:29.198 "current_io_qpairs": 0, 00:18:29.198 "pending_bdev_io": 0, 00:18:29.198 "completed_nvme_io": 77, 00:18:29.198 "transports": [ 00:18:29.198 { 00:18:29.198 "trtype": "RDMA", 00:18:29.198 "pending_data_buffer": 0, 00:18:29.198 "devices": [ 00:18:29.198 { 00:18:29.198 "name": "mlx5_0", 00:18:29.198 "polls": 2820254, 00:18:29.198 "idle_polls": 2820014, 00:18:29.198 "completions": 260, 00:18:29.198 "requests": 130, 00:18:29.198 "request_latency": 23986146, 00:18:29.198 "pending_free_request": 0, 00:18:29.198 "pending_rdma_read": 0, 00:18:29.198 "pending_rdma_write": 0, 00:18:29.198 "pending_rdma_send": 0, 00:18:29.198 "total_send_wrs": 206, 00:18:29.198 "send_doorbell_updates": 118, 00:18:29.198 "total_recv_wrs": 4226, 00:18:29.198 "recv_doorbell_updates": 119 00:18:29.198 }, 00:18:29.198 { 00:18:29.198 "name": "mlx5_1", 00:18:29.198 "polls": 2820254, 00:18:29.198 "idle_polls": 2820254, 00:18:29.198 "completions": 0, 00:18:29.198 "requests": 0, 00:18:29.198 "request_latency": 0, 00:18:29.198 "pending_free_request": 0, 00:18:29.198 "pending_rdma_read": 0, 00:18:29.198 "pending_rdma_write": 0, 00:18:29.198 "pending_rdma_send": 0, 00:18:29.198 "total_send_wrs": 0, 00:18:29.198 "send_doorbell_updates": 0, 00:18:29.198 "total_recv_wrs": 4096, 00:18:29.198 "recv_doorbell_updates": 1 00:18:29.198 } 00:18:29.198 ] 00:18:29.198 } 00:18:29.198 ] 00:18:29.198 } 00:18:29.198 ] 00:18:29.198 }' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1286 > 0 )) 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:18:29.198 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 131022614 > 0 )) 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:29.457 15:37:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:29.457 rmmod nvme_rdma 00:18:29.457 rmmod nvme_fabrics 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:29.457 
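The jsum helper driving the checks just above sums one numeric field across every poll group in the captured nvmf_get_stats JSON by piping jq into awk; the four assertions line up with the stats dump (admin_qpairs 2+2+1+2 = 7, io_qpairs 27+26+26+26 = 105, completions 359+360+307+260 = 1286, and request_latency totaling 131022614). A sketch matching the rpc.sh lines in the trace, assuming it reads the $stats variable captured earlier:

    # Sum a numeric field across all poll groups in the stats JSON (sketch).
    jsum() {
        local filter=$1
        # jq emits one number per poll group; awk totals the column.
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # e.g. jsum '.poll_groups[].io_qpairs'   # -> 105 in the run above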
15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2266374 ']' 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2266374 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 2266374 ']' 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 2266374 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2266374 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2266374' 00:18:29.457 killing process with pid 2266374 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 2266374 00:18:29.457 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 2266374 00:18:29.716 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:29.716 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:29.716 00:18:29.716 real 0m37.718s 00:18:29.716 user 2m2.292s 00:18:29.716 sys 0m7.287s 00:18:29.716 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:29.716 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.716 ************************************ 00:18:29.716 END TEST nvmf_rpc 00:18:29.716 ************************************ 00:18:29.716 15:37:07 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:18:29.716 15:37:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:29.716 15:37:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:29.716 15:37:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:29.716 ************************************ 00:18:29.716 START TEST nvmf_invalid 00:18:29.716 ************************************ 00:18:29.716 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:18:29.975 * Looking for test storage... 
00:18:29.975 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:29.975 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.976 --rc genhtml_branch_coverage=1 00:18:29.976 --rc genhtml_function_coverage=1 00:18:29.976 --rc genhtml_legend=1 00:18:29.976 --rc geninfo_all_blocks=1 00:18:29.976 --rc geninfo_unexecuted_blocks=1 00:18:29.976 00:18:29.976 ' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.976 --rc genhtml_branch_coverage=1 00:18:29.976 --rc genhtml_function_coverage=1 00:18:29.976 --rc genhtml_legend=1 00:18:29.976 --rc geninfo_all_blocks=1 00:18:29.976 --rc geninfo_unexecuted_blocks=1 00:18:29.976 00:18:29.976 ' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.976 --rc genhtml_branch_coverage=1 00:18:29.976 --rc genhtml_function_coverage=1 00:18:29.976 --rc genhtml_legend=1 00:18:29.976 --rc geninfo_all_blocks=1 00:18:29.976 --rc geninfo_unexecuted_blocks=1 00:18:29.976 00:18:29.976 ' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:29.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.976 --rc genhtml_branch_coverage=1 00:18:29.976 --rc genhtml_function_coverage=1 00:18:29.976 --rc genhtml_legend=1 00:18:29.976 --rc geninfo_all_blocks=1 00:18:29.976 --rc geninfo_unexecuted_blocks=1 00:18:29.976 00:18:29.976 ' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:29.976 
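The lcov version probe a few lines up (lt 1.15 2, via cmp_versions in scripts/common.sh) is a field-by-field comparison of dotted version strings, which is exactly what the ver1/ver2 loop in the trace walks through. A condensed sketch covering only the strict less-than path seen here (the helper name is hypothetical; the real cmp_versions takes an operator argument):

    # Succeeds (exit 0) when dotted version $1 sorts strictly before $2.
    version_lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not strictly less-than
    }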
15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:29.976 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
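
The bare "line 33: [: : integer expression expected" in the middle of the trace is not a harness failure; it is captured stderr from the '[' '' -eq 1 ']' test just above it. test's -eq requires integers, so comparing an empty variable makes it complain and return false, and the script simply falls through to the next branch. A two-line illustration (not the repo's code) of the failure mode and a defensive variant:

    flag=""
    [ "$flag" -eq 1 ] && echo enabled        # stderr: integer expression expected
    [ "${flag:-0}" -eq 1 ] || echo disabled  # default empty to 0 before comparing
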
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.976 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:29.977 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:29.977 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:29.977 15:37:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:36.542 15:37:13 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:36.542 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
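
gather_supported_nvmf_pci_devs works off a cache of PCI functions keyed by "vendor:device": the e810/x722/mlx arrays above are filled by indexing that cache with known Intel (0x8086) and Mellanox (0x15b3) device IDs, and because this rig drives mlx5, pci_devs is then narrowed to the mlx list, yielding the two ConnectX (0x1015) functions found. A rough sketch of the lookup pattern, using a sysfs walk as a simplified stand-in for the script's cache builder:

    declare -A pci_bus_cache   # "vendor:device" -> space-separated PCI addresses

    for dev in /sys/bus/pci/devices/*; do
        pci_bus_cache["$(<"$dev/vendor"):$(<"$dev/device")"]+="${dev##*/} "
    done

    mellanox=0x15b3
    mlx=()
    for id in 0x1013 0x1015 0x1017 0x1019 0x101b 0x101d 0x1021 0xa2d6 0xa2dc; do
        mlx+=(${pci_bus_cache["$mellanox:$id"]})   # missing keys expand to nothing
    done
    echo "found ${#mlx[@]} Mellanox function(s)"
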
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:36.542 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:36.542 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:36.542 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.542 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:18:36.543 15:37:13 
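
Mapping a matched PCI function to its Linux netdev name is a pure sysfs walk: the kernel exposes the interface under /sys/bus/pci/devices/<addr>/net/, and the "${pci_net_devs[@]##*/}" expansion in the trace strips everything but the basename. Condensed to its essentials (the address is the one from the log):

    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # -> mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
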
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:36.543 15:37:13 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:36.543 15:37:14 
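
After the IB/RDMA modules are loaded, get_rdma_if_list keeps only the interfaces that the RDMA tooling also reports, and the continue 2 in the trace skips to the next candidate as soon as one matches. A sketch of that double loop with stand-in arrays (rxe_net_devs would really come from rxe_cfg, as shown above):

    net_devs=(mlx_0_0 mlx_0_1)
    rxe_net_devs=(mlx_0_0 mlx_0_1)
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2   # matched; move on to the next net_dev
            fi
        done
    done
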
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:36.543 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:36.543 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:36.543 altname enp217s0f0np0 00:18:36.543 altname ens818f0np0 00:18:36.543 inet 192.168.100.8/24 scope global mlx_0_0 00:18:36.543 valid_lft forever preferred_lft forever 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:36.543 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:36.543 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:36.543 altname enp217s0f1np1 00:18:36.543 altname ens818f1np1 00:18:36.543 inet 192.168.100.9/24 scope global mlx_0_1 00:18:36.543 valid_lft forever preferred_lft forever 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:36.543 15:37:14 
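
get_ip_address reduces to the three-stage pipeline visible verbatim in the trace: field 4 of `ip -o -4 addr show <if>` is the address in CIDR form, and cut drops the prefix length. On this rig it resolves mlx_0_0 to 192.168.100.8 and mlx_0_1 to 192.168.100.9:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
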
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:36.543 192.168.100.9' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:36.543 192.168.100.9' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:36.543 15:37:14 
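
The first and second target IPs are then peeled off the newline-separated RDMA_IP_LIST exactly as traced: head -n 1 takes the first entry, tail -n +2 | head -n 1 takes the second.

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
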
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:36.543 192.168.100.9' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:36.543 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2274913 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2274913 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 2274913 ']' 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.544 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:36.544 [2024-11-03 15:37:14.210091] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:18:36.544 [2024-11-03 15:37:14.210144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.544 [2024-11-03 15:37:14.286112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.544 [2024-11-03 15:37:14.308145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.544 [2024-11-03 15:37:14.308187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
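
nvmfappstart launches nvmf_tgt in the background, records its pid (2274913 here), and waitforlisten blocks until the target's JSON-RPC socket answers. A plausible reduction of that wait loop, with the retry bound and probe method as assumptions rather than the harness's exact code:

    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # never came up
    }
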
00:18:36.544 [2024-11-03 15:37:14.308197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.544 [2024-11-03 15:37:14.308205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.544 [2024-11-03 15:37:14.308228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.544 [2024-11-03 15:37:14.309799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.544 [2024-11-03 15:37:14.309893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.544 [2024-11-03 15:37:14.309987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.544 [2024-11-03 15:37:14.309989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.803 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:36.803 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:18:36.803 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:36.803 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:36.803 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:36.803 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.803 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:36.803 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16233 00:18:37.062 [2024-11-03 15:37:14.629734] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:37.062 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:37.062 { 00:18:37.062 "nqn": "nqn.2016-06.io.spdk:cnode16233", 00:18:37.062 "tgt_name": "foobar", 00:18:37.062 "method": "nvmf_create_subsystem", 00:18:37.062 "req_id": 1 00:18:37.062 } 00:18:37.062 Got JSON-RPC error response 00:18:37.062 response: 00:18:37.062 { 00:18:37.062 "code": -32603, 00:18:37.062 "message": "Unable to find target foobar" 00:18:37.062 }' 00:18:37.062 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:37.062 { 00:18:37.062 "nqn": "nqn.2016-06.io.spdk:cnode16233", 00:18:37.062 "tgt_name": "foobar", 00:18:37.062 "method": "nvmf_create_subsystem", 00:18:37.062 "req_id": 1 00:18:37.062 } 00:18:37.062 Got JSON-RPC error response 00:18:37.062 response: 00:18:37.062 { 00:18:37.062 "code": -32603, 00:18:37.062 "message": "Unable to find target foobar" 00:18:37.062 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:37.062 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:37.062 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29768 00:18:37.062 [2024-11-03 15:37:14.842468] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
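
Every negative case in invalid.sh follows the shape just traced: call rpc.py nvmf_create_subsystem with one bad argument, capture the JSON-RPC error, and glob-match the expected message (the backslash-escaped pattern in the trace is just xtrace's rendering of a quoted glob). Reduced to its skeleton:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16233 2>&1) || true
    [[ $out == *"Unable to find target"* ]] && echo "PASS: bogus target rejected"
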
nqn.2016-06.io.spdk:cnode29768: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:37.321 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:37.321 { 00:18:37.321 "nqn": "nqn.2016-06.io.spdk:cnode29768", 00:18:37.321 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:37.321 "method": "nvmf_create_subsystem", 00:18:37.321 "req_id": 1 00:18:37.321 } 00:18:37.321 Got JSON-RPC error response 00:18:37.321 response: 00:18:37.321 { 00:18:37.321 "code": -32602, 00:18:37.321 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:37.321 }' 00:18:37.321 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:37.321 { 00:18:37.321 "nqn": "nqn.2016-06.io.spdk:cnode29768", 00:18:37.321 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:37.321 "method": "nvmf_create_subsystem", 00:18:37.321 "req_id": 1 00:18:37.321 } 00:18:37.321 Got JSON-RPC error response 00:18:37.321 response: 00:18:37.321 { 00:18:37.321 "code": -32602, 00:18:37.321 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:37.321 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:37.321 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:37.321 15:37:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5833 00:18:37.321 [2024-11-03 15:37:15.059187] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5833: invalid model number 'SPDK_Controller' 00:18:37.321 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:37.321 { 00:18:37.321 "nqn": "nqn.2016-06.io.spdk:cnode5833", 00:18:37.321 "model_number": "SPDK_Controller\u001f", 00:18:37.321 "method": "nvmf_create_subsystem", 00:18:37.321 "req_id": 1 00:18:37.321 } 00:18:37.321 Got JSON-RPC error response 00:18:37.321 response: 00:18:37.321 { 00:18:37.321 "code": -32602, 00:18:37.321 "message": "Invalid MN SPDK_Controller\u001f" 00:18:37.321 }' 00:18:37.321 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:37.321 { 00:18:37.321 "nqn": "nqn.2016-06.io.spdk:cnode5833", 00:18:37.321 "model_number": "SPDK_Controller\u001f", 00:18:37.321 "method": "nvmf_create_subsystem", 00:18:37.321 "req_id": 1 00:18:37.321 } 00:18:37.321 Got JSON-RPC error response 00:18:37.321 response: 00:18:37.321 { 00:18:37.321 "code": -32602, 00:18:37.321 "message": "Invalid MN SPDK_Controller\u001f" 00:18:37.321 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:37.321 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:37.321 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
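
Both the serial-number and model-number cases above poison an otherwise valid string with a non-printable byte via bash ANSI-C quoting: $'SPDKISFASTANDAWESOME\037' appends octal 037 (0x1f, the unit separator), which is why the JSON error renders it as \u001f. The echo -e '\x1f' lines in the trace produce the same byte for display:

    sn=$'SPDKISFASTANDAWESOME\037'
    printf '%s' "$sn" | od -c | head -n 2   # dump ends in 037
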
target/invalid.sh@21 -- # local chars 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:37.322 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.581 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:37.582 15:37:15 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ @ == \- ]] 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '@17Lz%zJa1wCPcBnu\5Q|' 00:18:37.582 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '@17Lz%zJa1wCPcBnu\5Q|' nqn.2016-06.io.spdk:cnode27805 00:18:37.842 [2024-11-03 15:37:15.428479] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27805: invalid serial number '@17Lz%zJa1wCPcBnu\5Q|' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:37.842 { 00:18:37.842 "nqn": "nqn.2016-06.io.spdk:cnode27805", 00:18:37.842 "serial_number": "@17Lz%zJa1wCPcBnu\\5Q|", 00:18:37.842 "method": "nvmf_create_subsystem", 00:18:37.842 "req_id": 1 00:18:37.842 } 00:18:37.842 Got JSON-RPC error response 00:18:37.842 response: 00:18:37.842 { 00:18:37.842 "code": -32602, 00:18:37.842 "message": "Invalid SN @17Lz%zJa1wCPcBnu\\5Q|" 00:18:37.842 }' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:37.842 { 00:18:37.842 "nqn": "nqn.2016-06.io.spdk:cnode27805", 00:18:37.842 "serial_number": "@17Lz%zJa1wCPcBnu\\5Q|", 00:18:37.842 "method": "nvmf_create_subsystem", 00:18:37.842 "req_id": 1 00:18:37.842 } 00:18:37.842 Got JSON-RPC error response 00:18:37.842 response: 00:18:37.842 { 00:18:37.842 "code": -32602, 00:18:37.842 "message": "Invalid SN @17Lz%zJa1wCPcBnu\\5Q|" 00:18:37.842 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:37.842 15:37:15 
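
The long run of printf %x / echo -e / string+= lines above is gen_random_s assembling the 21-character serial one character at a time: it keeps an array of the decimal codes 32-127, picks one per iteration, hex-encodes it, and expands it back to a character with echo -e (RANDOM=0 earlier in invalid.sh makes the sequence reproducible). A compact reconstruction of that loop; the selection expression is an assumption, while the encode/append steps are verbatim from the trace:

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})   # printable ASCII plus DEL, as in the trace
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }
    gen_random_s 21   # 21 random characters, like the serial just tested
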
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x31' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.842 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:37.843 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:38.102 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:38.102 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.102 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.102 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:38.103 15:37:15 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 102 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'nVZ[9b1rVj-mAS]8%;D$^A'\''kwbvVexqH7f{fZ[,=l' 00:18:38.103 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'nVZ[9b1rVj-mAS]8%;D$^A'\''kwbvVexqH7f{fZ[,=l' nqn.2016-06.io.spdk:cnode16787 00:18:38.362 [2024-11-03 15:37:15.962264] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16787: invalid model number 'nVZ[9b1rVj-mAS]8%;D$^A'kwbvVexqH7f{fZ[,=l' 00:18:38.363 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:38.363 { 00:18:38.363 "nqn": "nqn.2016-06.io.spdk:cnode16787", 00:18:38.363 "model_number": "nVZ[9b1rVj-mAS]8%;D$^A'\''kwbvVexqH7f{fZ[,=l", 00:18:38.363 "method": "nvmf_create_subsystem", 00:18:38.363 "req_id": 1 00:18:38.363 } 00:18:38.363 Got JSON-RPC error response 00:18:38.363 response: 00:18:38.363 { 00:18:38.363 "code": -32602, 00:18:38.363 "message": "Invalid MN nVZ[9b1rVj-mAS]8%;D$^A'\''kwbvVexqH7f{fZ[,=l" 00:18:38.363 }' 00:18:38.363 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:38.363 { 00:18:38.363 "nqn": "nqn.2016-06.io.spdk:cnode16787", 00:18:38.363 "model_number": "nVZ[9b1rVj-mAS]8%;D$^A'kwbvVexqH7f{fZ[,=l", 00:18:38.363 "method": "nvmf_create_subsystem", 00:18:38.363 "req_id": 1 00:18:38.363 } 00:18:38.363 Got JSON-RPC error response 00:18:38.363 response: 00:18:38.363 { 00:18:38.363 "code": -32602, 00:18:38.363 "message": "Invalid MN nVZ[9b1rVj-mAS]8%;D$^A'kwbvVexqH7f{fZ[,=l" 00:18:38.363 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:38.363 15:37:15 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:18:38.622 [2024-11-03 15:37:16.189223] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20e5860/0x20e9d30) succeed. 00:18:38.622 [2024-11-03 15:37:16.198143] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20e6ea0/0x212b3d0) succeed. 
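The trace above shows target/invalid.sh building a 41-character random model number one byte at a time: invalid.sh@25 picks a code point, renders it as hex with printf %x, decodes it back to a character with echo -e '\xNN', and appends it to $string, while invalid.sh@24 drives the ll < length loop; invalid.sh@28 then checks the result for a leading '-' before use. A minimal self-contained sketch of that pattern, assuming a helper name and a fixed printable range (0x21-0x7e) that the trace itself does not spell out:

#!/usr/bin/env bash
# Sketch of the byte-at-a-time builder traced above (invalid.sh@24-25).
# gen_random_string and the 0x21..0x7e range are illustrative assumptions.
gen_random_string() {
    local length=$1 string='' ll code
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))                    # printable ASCII 0x21..0x7e
        string+=$(echo -e "\\x$(printf '%x' "$code")")  # hex code -> literal char
    done
    printf '%s\n' "$string"
}
mn=$(gen_random_string 41)   # e.g. nVZ[9b1rVj-mAS]8%;D$^A'kwbvVexqH7f{fZ[,=l

invalid.sh@58-59 then passes the string to scripts/rpc.py nvmf_create_subsystem -d and asserts that the captured JSON-RPC error contains 'Invalid MN'; the cntlid-range cases that follow (min_cntlid 0, min_cntlid 65520, max_cntlid 0, max_cntlid 65520, min 6 with max 5) repeat the same capture-and-match pattern against 'Invalid cntlid range', and nvmf_delete_target --name foobar does likewise against the missing-target message.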
00:18:38.622 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:38.880 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:18:38.880 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:18:38.880 192.168.100.9' 00:18:38.880 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:38.880 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:18:38.880 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:18:39.139 [2024-11-03 15:37:16.759660] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:39.139 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:39.139 { 00:18:39.139 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:39.139 "listen_address": { 00:18:39.139 "trtype": "rdma", 00:18:39.139 "traddr": "192.168.100.8", 00:18:39.139 "trsvcid": "4421" 00:18:39.139 }, 00:18:39.139 "method": "nvmf_subsystem_remove_listener", 00:18:39.139 "req_id": 1 00:18:39.139 } 00:18:39.139 Got JSON-RPC error response 00:18:39.139 response: 00:18:39.139 { 00:18:39.139 "code": -32602, 00:18:39.139 "message": "Invalid parameters" 00:18:39.139 }' 00:18:39.139 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:39.139 { 00:18:39.139 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:39.139 "listen_address": { 00:18:39.139 "trtype": "rdma", 00:18:39.139 "traddr": "192.168.100.8", 00:18:39.139 "trsvcid": "4421" 00:18:39.139 }, 00:18:39.139 "method": "nvmf_subsystem_remove_listener", 00:18:39.139 "req_id": 1 00:18:39.139 } 00:18:39.139 Got JSON-RPC error response 00:18:39.139 response: 00:18:39.139 { 00:18:39.139 "code": -32602, 00:18:39.139 "message": "Invalid parameters" 00:18:39.139 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:39.139 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10581 -i 0 00:18:39.398 [2024-11-03 15:37:16.960312] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10581: invalid cntlid range [0-65519] 00:18:39.398 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:39.398 { 00:18:39.398 "nqn": "nqn.2016-06.io.spdk:cnode10581", 00:18:39.398 "min_cntlid": 0, 00:18:39.398 "method": "nvmf_create_subsystem", 00:18:39.398 "req_id": 1 00:18:39.398 } 00:18:39.398 Got JSON-RPC error response 00:18:39.398 response: 00:18:39.398 { 00:18:39.398 "code": -32602, 00:18:39.398 "message": "Invalid cntlid range [0-65519]" 00:18:39.398 }' 00:18:39.398 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:39.398 { 00:18:39.398 "nqn": "nqn.2016-06.io.spdk:cnode10581", 00:18:39.398 "min_cntlid": 0, 00:18:39.398 "method": "nvmf_create_subsystem", 00:18:39.398 "req_id": 1 00:18:39.398 } 00:18:39.398 Got JSON-RPC error response 00:18:39.398 response: 00:18:39.398 { 00:18:39.398 "code": -32602, 00:18:39.398 "message": 
"Invalid cntlid range [0-65519]" 00:18:39.398 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:39.398 15:37:16 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27033 -i 65520 00:18:39.398 [2024-11-03 15:37:17.161069] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27033: invalid cntlid range [65520-65519] 00:18:39.657 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:39.657 { 00:18:39.657 "nqn": "nqn.2016-06.io.spdk:cnode27033", 00:18:39.657 "min_cntlid": 65520, 00:18:39.657 "method": "nvmf_create_subsystem", 00:18:39.657 "req_id": 1 00:18:39.657 } 00:18:39.657 Got JSON-RPC error response 00:18:39.657 response: 00:18:39.657 { 00:18:39.657 "code": -32602, 00:18:39.657 "message": "Invalid cntlid range [65520-65519]" 00:18:39.657 }' 00:18:39.657 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:39.657 { 00:18:39.657 "nqn": "nqn.2016-06.io.spdk:cnode27033", 00:18:39.657 "min_cntlid": 65520, 00:18:39.657 "method": "nvmf_create_subsystem", 00:18:39.657 "req_id": 1 00:18:39.657 } 00:18:39.657 Got JSON-RPC error response 00:18:39.657 response: 00:18:39.657 { 00:18:39.657 "code": -32602, 00:18:39.657 "message": "Invalid cntlid range [65520-65519]" 00:18:39.657 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:39.658 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5797 -I 0 00:18:39.658 [2024-11-03 15:37:17.369818] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5797: invalid cntlid range [1-0] 00:18:39.658 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:39.658 { 00:18:39.658 "nqn": "nqn.2016-06.io.spdk:cnode5797", 00:18:39.658 "max_cntlid": 0, 00:18:39.658 "method": "nvmf_create_subsystem", 00:18:39.658 "req_id": 1 00:18:39.658 } 00:18:39.658 Got JSON-RPC error response 00:18:39.658 response: 00:18:39.658 { 00:18:39.658 "code": -32602, 00:18:39.658 "message": "Invalid cntlid range [1-0]" 00:18:39.658 }' 00:18:39.658 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:39.658 { 00:18:39.658 "nqn": "nqn.2016-06.io.spdk:cnode5797", 00:18:39.658 "max_cntlid": 0, 00:18:39.658 "method": "nvmf_create_subsystem", 00:18:39.658 "req_id": 1 00:18:39.658 } 00:18:39.658 Got JSON-RPC error response 00:18:39.658 response: 00:18:39.658 { 00:18:39.658 "code": -32602, 00:18:39.658 "message": "Invalid cntlid range [1-0]" 00:18:39.658 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:39.658 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25667 -I 65520 00:18:39.917 [2024-11-03 15:37:17.570555] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25667: invalid cntlid range [1-65520] 00:18:39.917 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:39.917 { 00:18:39.917 "nqn": "nqn.2016-06.io.spdk:cnode25667", 00:18:39.917 "max_cntlid": 65520, 00:18:39.917 "method": "nvmf_create_subsystem", 00:18:39.917 "req_id": 1 00:18:39.917 } 00:18:39.917 Got JSON-RPC 
error response 00:18:39.917 response: 00:18:39.917 { 00:18:39.917 "code": -32602, 00:18:39.917 "message": "Invalid cntlid range [1-65520]" 00:18:39.917 }' 00:18:39.917 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:39.917 { 00:18:39.917 "nqn": "nqn.2016-06.io.spdk:cnode25667", 00:18:39.917 "max_cntlid": 65520, 00:18:39.917 "method": "nvmf_create_subsystem", 00:18:39.917 "req_id": 1 00:18:39.917 } 00:18:39.917 Got JSON-RPC error response 00:18:39.917 response: 00:18:39.917 { 00:18:39.917 "code": -32602, 00:18:39.917 "message": "Invalid cntlid range [1-65520]" 00:18:39.917 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:39.917 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12589 -i 6 -I 5 00:18:40.176 [2024-11-03 15:37:17.779324] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12589: invalid cntlid range [6-5] 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:40.176 { 00:18:40.176 "nqn": "nqn.2016-06.io.spdk:cnode12589", 00:18:40.176 "min_cntlid": 6, 00:18:40.176 "max_cntlid": 5, 00:18:40.176 "method": "nvmf_create_subsystem", 00:18:40.176 "req_id": 1 00:18:40.176 } 00:18:40.176 Got JSON-RPC error response 00:18:40.176 response: 00:18:40.176 { 00:18:40.176 "code": -32602, 00:18:40.176 "message": "Invalid cntlid range [6-5]" 00:18:40.176 }' 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:40.176 { 00:18:40.176 "nqn": "nqn.2016-06.io.spdk:cnode12589", 00:18:40.176 "min_cntlid": 6, 00:18:40.176 "max_cntlid": 5, 00:18:40.176 "method": "nvmf_create_subsystem", 00:18:40.176 "req_id": 1 00:18:40.176 } 00:18:40.176 Got JSON-RPC error response 00:18:40.176 response: 00:18:40.176 { 00:18:40.176 "code": -32602, 00:18:40.176 "message": "Invalid cntlid range [6-5]" 00:18:40.176 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:40.176 { 00:18:40.176 "name": "foobar", 00:18:40.176 "method": "nvmf_delete_target", 00:18:40.176 "req_id": 1 00:18:40.176 } 00:18:40.176 Got JSON-RPC error response 00:18:40.176 response: 00:18:40.176 { 00:18:40.176 "code": -32602, 00:18:40.176 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:40.176 }' 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:40.176 { 00:18:40.176 "name": "foobar", 00:18:40.176 "method": "nvmf_delete_target", 00:18:40.176 "req_id": 1 00:18:40.176 } 00:18:40.176 Got JSON-RPC error response 00:18:40.176 response: 00:18:40.176 { 00:18:40.176 "code": -32602, 00:18:40.176 "message": "The specified target doesn't exist, cannot delete it." 
00:18:40.176 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.176 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:40.176 rmmod nvme_rdma 00:18:40.176 rmmod nvme_fabrics 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2274913 ']' 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2274913 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 2274913 ']' 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 2274913 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:40.436 15:37:17 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2274913 00:18:40.436 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:40.436 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:40.436 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2274913' 00:18:40.436 killing process with pid 2274913 00:18:40.436 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 2274913 00:18:40.436 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 2274913 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:40.695 00:18:40.695 real 0m10.847s 00:18:40.695 user 0m19.907s 00:18:40.695 sys 0m6.149s 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:40.695 ************************************ 00:18:40.695 
END TEST nvmf_invalid 00:18:40.695 ************************************ 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.695 ************************************ 00:18:40.695 START TEST nvmf_connect_stress 00:18:40.695 ************************************ 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:18:40.695 * Looking for test storage... 00:18:40.695 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:18:40.695 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:40.955 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:40.955 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.955 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.955 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.955 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.955 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.955 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:40.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.956 --rc genhtml_branch_coverage=1 00:18:40.956 --rc genhtml_function_coverage=1 00:18:40.956 --rc genhtml_legend=1 00:18:40.956 --rc geninfo_all_blocks=1 00:18:40.956 --rc geninfo_unexecuted_blocks=1 00:18:40.956 00:18:40.956 ' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:40.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.956 --rc genhtml_branch_coverage=1 00:18:40.956 --rc genhtml_function_coverage=1 00:18:40.956 --rc genhtml_legend=1 00:18:40.956 --rc geninfo_all_blocks=1 00:18:40.956 --rc geninfo_unexecuted_blocks=1 00:18:40.956 00:18:40.956 ' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:40.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.956 --rc genhtml_branch_coverage=1 00:18:40.956 --rc genhtml_function_coverage=1 00:18:40.956 --rc genhtml_legend=1 00:18:40.956 --rc geninfo_all_blocks=1 00:18:40.956 --rc geninfo_unexecuted_blocks=1 00:18:40.956 00:18:40.956 ' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:40.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.956 --rc genhtml_branch_coverage=1 00:18:40.956 --rc genhtml_function_coverage=1 00:18:40.956 --rc genhtml_legend=1 00:18:40.956 --rc geninfo_all_blocks=1 00:18:40.956 --rc geninfo_unexecuted_blocks=1 00:18:40.956 00:18:40.956 ' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.956 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.956 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:40.957 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:40.957 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:40.957 15:37:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.528 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:47.529 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:47.529 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:47.529 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:47.529 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.529 15:37:25 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:47.529 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:47.793 
15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:47.793 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:47.793 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:47.793 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:47.793 altname enp217s0f0np0 00:18:47.793 altname ens818f0np0 00:18:47.793 inet 192.168.100.8/24 scope global mlx_0_0 00:18:47.793 valid_lft forever preferred_lft forever 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:47.794 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:47.794 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:47.794 altname enp217s0f1np1 00:18:47.794 altname ens818f1np1 00:18:47.794 inet 192.168.100.9/24 scope global mlx_0_1 00:18:47.794 valid_lft forever preferred_lft forever 00:18:47.794 15:37:25 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:47.794 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:47.795 
15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:47.795 192.168.100.9' 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:47.795 192.168.100.9' 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:47.795 192.168.100.9' 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:18:47.795 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2279057 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2279057 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 2279057 ']' 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.796 15:37:25 
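
Note the apparent split in RDMA_IP_LIST='192.168.100.8 ... 192.168.100.9' above: the variable actually holds the two addresses separated by a newline, and the log writer stamps its elapsed-time prefix onto the second line of the quoted string. The first/second target selection then reduces to a head/tail pipeline, as the @485/@486 entries show:

  RDMA_IP_LIST=$(get_available_rdma_ips)    # newline-separated address list
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
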
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.796 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.097 [2024-11-03 15:37:25.587593] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:18:48.097 [2024-11-03 15:37:25.587656] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.097 [2024-11-03 15:37:25.668180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:48.097 [2024-11-03 15:37:25.690572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.097 [2024-11-03 15:37:25.690615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.097 [2024-11-03 15:37:25.690625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.097 [2024-11-03 15:37:25.690634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.097 [2024-11-03 15:37:25.690658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.097 [2024-11-03 15:37:25.692151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.097 [2024-11-03 15:37:25.692258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.097 [2024-11-03 15:37:25.692260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.097 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:48.097 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:18:48.097 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.097 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:48.097 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.097 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.097 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:48.097 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.097 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.097 [2024-11-03 15:37:25.863829] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22213d0/0x2225880) succeed. 
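
With nvmf_tgt up and the mlx5 IB devices being created, the test configures the target over /var/tmp/spdk.sock. Collected from the rpc_cmd entries above and just below (arguments exactly as traced; rpc_cmd is the suite's helper for issuing JSON-RPCs to the running target):

  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks
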
00:18:48.097 [2024-11-03 15:37:25.873004] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2222970/0x2266f20) succeed. 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.408 [2024-11-03 15:37:25.983049] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.408 NULL1 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2279281 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:48.408 15:37:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:48.408 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:48.408 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:48.408 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:48.408 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:48.408 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:48.408 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:48.408 
15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:48.408 [the @27 for-i / @28 cat pair repeats identically for the remaining entries of rpc.txt; duplicate iterations omitted] 00:18:48.408 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:48.409
15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:48.409 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:48.409 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:48.409 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:48.409 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:48.409 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279281 00:18:48.409 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.409 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.409 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.695 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.695 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279281 00:18:48.695 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.695 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.695 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.264 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.264 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279281 00:18:49.264 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.264 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.264 15:37:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.523 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.523 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279281 00:18:49.523 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.523 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.523 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.782 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.782 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279281 00:18:49.782 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.782 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.782 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.041 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.041 
15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279281 00:18:50.041 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.041 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.041 15:37:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.300 [the @589 / @34 kill -0 / @35 rpc_cmd / xtrace_disable / set +x cycle repeats identically about every 300 ms from 00:18:50.300 through 00:18:57.805 while connect_stress runs; duplicate iterations omitted] 00:18:57.805 15:37:35
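
Those repeated @34/@35 pairs are connect_stress.sh's monitor loop: while the stressor PID is still alive, keep replaying the queued rpc.txt commands against the target. The script body itself is not quoted in this log, so the following is a rough reconstruction from the traced line numbers, not the verbatim source:

  # PERF_PID: the connect_stress process started at @21;
  # rpcs: the rpc.txt file assembled by the for/cat loop above.
  while kill -0 $PERF_PID; do    # line 34 in the trace
      rpc_cmd < "$rpcs"          # line 35
  done
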
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.805 15:37:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279281 00:18:57.805 15:37:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.805 15:37:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.805 15:37:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.376 15:37:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.376 15:37:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279281 00:18:58.376 15:37:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.376 15:37:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.376 15:37:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.376 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279281 00:18:58.636 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2279281) - No such process 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2279281 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:58.636 rmmod nvme_rdma 00:18:58.636 rmmod nvme_fabrics 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2279057 ']' 00:18:58.636 15:37:36 
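
After the stressor exits (the "No such process" from kill -0 at 00:18:58.636), nvmftestfini tears the stack down: sync, then unload nvme-rdma and nvme-fabrics under set +e with up to 20 attempts (the bare "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines are modprobe -v's own output). A sketch of the @121-@129 sequence; the retry/break structure is inferred from the {1..20} loop header, not quoted from the script:

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
  done
  set -e
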
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2279057 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 2279057 ']' 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 2279057 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2279057 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2279057' 00:18:58.636 killing process with pid 2279057 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 2279057 00:18:58.636 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 2279057 00:18:58.897 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:58.897 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:58.897 00:18:58.897 real 0m18.229s 00:18:58.897 user 0m39.981s 00:18:58.897 sys 0m7.881s 00:18:58.897 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:58.897 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.897 ************************************ 00:18:58.897 END TEST nvmf_connect_stress 00:18:58.897 ************************************ 00:18:58.897 15:37:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:58.897 15:37:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:58.897 15:37:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:58.897 15:37:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.897 ************************************ 00:18:58.897 START TEST nvmf_fused_ordering 00:18:58.897 ************************************ 00:18:58.897 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:59.159 * Looking for test storage... 
00:18:59.159 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:59.159 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.160 --rc genhtml_branch_coverage=1 00:18:59.160 --rc genhtml_function_coverage=1 00:18:59.160 --rc genhtml_legend=1 00:18:59.160 --rc geninfo_all_blocks=1 00:18:59.160 --rc geninfo_unexecuted_blocks=1 00:18:59.160 00:18:59.160 ' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.160 --rc genhtml_branch_coverage=1 00:18:59.160 --rc genhtml_function_coverage=1 00:18:59.160 --rc genhtml_legend=1 00:18:59.160 --rc geninfo_all_blocks=1 00:18:59.160 --rc geninfo_unexecuted_blocks=1 00:18:59.160 00:18:59.160 ' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.160 --rc genhtml_branch_coverage=1 00:18:59.160 --rc genhtml_function_coverage=1 00:18:59.160 --rc genhtml_legend=1 00:18:59.160 --rc geninfo_all_blocks=1 00:18:59.160 --rc geninfo_unexecuted_blocks=1 00:18:59.160 00:18:59.160 ' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.160 --rc genhtml_branch_coverage=1 00:18:59.160 --rc genhtml_function_coverage=1 00:18:59.160 --rc genhtml_legend=1 00:18:59.160 --rc geninfo_all_blocks=1 00:18:59.160 --rc geninfo_unexecuted_blocks=1 00:18:59.160 00:18:59.160 ' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=[paths/export.sh@2-@4 each re-prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin; the duplicated full PATH values are omitted] 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [PATH value omitted] 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.160 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.160 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.161 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:59.161 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:59.161 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:59.161 15:37:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:07.293 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:07.293 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:07.293 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:07.293 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.293 15:37:43 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.293 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.294 
15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:07.294 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.294 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:07.294 altname enp217s0f0np0 00:19:07.294 altname ens818f0np0 00:19:07.294 inet 192.168.100.8/24 scope global mlx_0_0 00:19:07.294 valid_lft forever preferred_lft forever 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:07.294 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:07.294 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:07.294 altname enp217s0f1np1 00:19:07.294 altname ens818f1np1 00:19:07.294 inet 192.168.100.9/24 scope global mlx_0_1 00:19:07.294 valid_lft forever preferred_lft forever 00:19:07.294 15:37:43 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.294 
15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:07.294 192.168.100.9' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:07.294 192.168.100.9' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:07.294 192.168.100.9' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2284377 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2284377 00:19:07.294 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 2284377 ']' 00:19:07.295 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.295 15:37:43 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:07.295 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.295 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.295 15:37:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.295 [2024-11-03 15:37:43.960543] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:19:07.295 [2024-11-03 15:37:43.960604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.295 [2024-11-03 15:37:44.039774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.295 [2024-11-03 15:37:44.061480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.295 [2024-11-03 15:37:44.061518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.295 [2024-11-03 15:37:44.061528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.295 [2024-11-03 15:37:44.061536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.295 [2024-11-03 15:37:44.061543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.295 [2024-11-03 15:37:44.062129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.295 [2024-11-03 15:37:44.221270] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1487f50/0x148c400) succeed. 00:19:07.295 [2024-11-03 15:37:44.230159] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14893b0/0x14cdaa0) succeed. 
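The trace above is the harness's nvmfappstart/waitforlisten step: nvmf_tgt is launched with core mask 0x2 and the run blocks until the RPC socket answers before the first rpc_cmd is issued. A minimal sketch of that start-and-poll pattern, assuming this run's workspace layout and using rpc_get_methods purely as a liveness probe (the retry budget here is an assumption, not the harness's actual value):

  # Sketch only: launch the target, then poll until /var/tmp/spdk.sock answers.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk        # assumed root, as in this run
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for _ in $(seq 1 100); do                                 # assumed retry budget
      # rpc.py exits non-zero until nvmf_tgt is listening on the socket
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done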
00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.295 [2024-11-03 15:37:44.276481] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.295 NULL1 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.295 15:37:44 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:07.295 [2024-11-03 15:37:44.333043] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
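Immediately before the fused_ordering client starts (the "Starting SPDK ... initialization..." line above), the harness provisions the target through a short rpc_cmd sequence. The same sequence written directly against rpc.py -- every RPC name and value below is copied from the trace; only the $RPC shorthand is an assumption for brevity:

  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"          # assumed shorthand
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC bdev_null_create NULL1 1000 512                      # 1000 MB null bdev, 512 B blocks: the 1GB namespace below
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects with the matching transport string (trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) and issues its numbered operations.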
00:19:07.295 [2024-11-03 15:37:44.333079] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284423 ] Attached to nqn.2016-06.io.spdk:cnode1 Namespace ID: 1 size: 1GB
fused_ordering(0) through fused_ordering(1023) -- all 1024 fused-ordering operations completed (the repetitive per-operation lines are condensed here)
00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:07.300 rmmod nvme_rdma 00:19:07.300 rmmod nvme_fabrics 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:19:07.300 15:37:45
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2284377 ']' 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2284377 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 2284377 ']' 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 2284377 00:19:07.300 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2284377 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2284377' 00:19:07.560 killing process with pid 2284377 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 2284377 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 2284377 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:07.560 00:19:07.560 real 0m8.673s 00:19:07.560 user 0m4.042s 00:19:07.560 sys 0m5.877s 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:07.560 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.560 ************************************ 00:19:07.560 END TEST nvmf_fused_ordering 00:19:07.560 ************************************ 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.820 ************************************ 00:19:07.820 START TEST nvmf_ns_masking 00:19:07.820 ************************************ 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:19:07.820 * Looking for test storage... 
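The teardown traced above follows a recurring autotest pattern: the kernel nvme-rdma/nvme-fabrics modules are unloaded inside a retry loop guarded by set +e (a briefly busy module must not abort the run), and the target process is killed only after the harness has confirmed the pid is still alive. A minimal sketch of that pattern, with a hypothetical $nvmfpid standing in for the harness's pid variable:

    # unload initiator modules; retry because references may linger briefly
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e

    # kill the target only if the pid is still alive, then reap it
    if kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi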
00:19:07.820 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:07.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.820 --rc genhtml_branch_coverage=1 00:19:07.820 --rc genhtml_function_coverage=1 00:19:07.820 --rc genhtml_legend=1 00:19:07.820 --rc geninfo_all_blocks=1 00:19:07.820 --rc geninfo_unexecuted_blocks=1 00:19:07.820 00:19:07.820 ' 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:07.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.820 --rc genhtml_branch_coverage=1 00:19:07.820 --rc genhtml_function_coverage=1 00:19:07.820 --rc genhtml_legend=1 00:19:07.820 --rc geninfo_all_blocks=1 00:19:07.820 --rc geninfo_unexecuted_blocks=1 00:19:07.820 00:19:07.820 ' 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:07.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.820 --rc genhtml_branch_coverage=1 00:19:07.820 --rc genhtml_function_coverage=1 00:19:07.820 --rc genhtml_legend=1 00:19:07.820 --rc geninfo_all_blocks=1 00:19:07.820 --rc geninfo_unexecuted_blocks=1 00:19:07.820 00:19:07.820 ' 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:07.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.820 --rc genhtml_branch_coverage=1 00:19:07.820 --rc genhtml_function_coverage=1 00:19:07.820 --rc genhtml_legend=1 00:19:07.820 --rc geninfo_all_blocks=1 00:19:07.820 --rc geninfo_unexecuted_blocks=1 00:19:07.820 00:19:07.820 ' 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.820 15:37:45 
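The lt/cmp_versions trace above is a plain field-wise version compare: both version strings are split on ".", "-" and ":" into arrays, then compared position by position until one field differs (here "lt 1.15 2" succeeds because 1 < 2 in the first field). A simplified standalone sketch of the same idea, not the exact scripts/common.sh source, and assuming purely numeric fields:

    # return 0 if $1 < $2, comparing version fields numerically
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"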
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.820 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.080 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:08.080 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:08.080 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.080 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.080 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:08.081 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:08.081 15:37:45 
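The "[: : integer expression expected" line above is nvmf/common.sh tripping over an empty variable: '[' '' -eq 1 ']' asks test(1) to compare an empty string numerically, which is a usage error (harmless here) rather than a false result. The usual guard is to default the variable before the arithmetic test; a minimal sketch, with maybe_empty used only as an illustrative name:

    # an empty string makes '-eq' error out:
    #   [ "$maybe_empty" -eq 1 ]   -> "[: : integer expression expected"
    # defaulting to 0 keeps the comparison well-defined
    maybe_empty=""
    if [ "${maybe_empty:-0}" -eq 1 ]; then
        echo "enabled"
    fi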
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b9c516c0-f29e-4f73-9a59-6489ae36e9d8 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7e511899-1a09-4964-8eb2-3794c9eb86a3 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f6ba18b1-6dfb-40cd-8293-b72742d6eb0b 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:08.081 15:37:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:14.805 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.806 15:37:52 
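The e810/x722/mlx arrays built above classify RDMA-capable NICs purely by PCI vendor:device ID (0x8086 Intel, 0x15b3 Mellanox), so the "Found 0000:d9:00.0 (0x15b3 - 0x1015)" lines that follow are lookups against that table. The harness reads a pre-built pci_bus_cache; the sketch below uses lspci only to stay self-contained, and the device names are the commonly published ones (0x1015 ConnectX-4 Lx, 0x1017 ConnectX-5):

    # map vendor:device pairs to a NIC family the tests know how to drive
    declare -A family=(
        [15b3:1015]="mlx (ConnectX-4 Lx)"
        [15b3:1017]="mlx (ConnectX-5)"
        [8086:159b]="e810"
        [8086:37d2]="x722"
    )
    # lspci -nD prints: "0000:d9:00.0 0200: 15b3:1015"
    while read -r addr class ids _; do
        [[ -n ${family[$ids]:-} ]] && echo "Found $addr (${family[$ids]})"
    done < <(lspci -nD)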
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:14.806 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:14.806 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:14.806 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:14.806 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:14.806 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:14.806 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:14.806 altname enp217s0f0np0 00:19:14.806 altname ens818f0np0 00:19:14.806 inet 192.168.100.8/24 scope global mlx_0_0 00:19:14.806 valid_lft forever preferred_lft forever 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:14.806 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:14.806 link/ether 
ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:14.806 altname enp217s0f1np1 00:19:14.806 altname ens818f1np1 00:19:14.806 inet 192.168.100.9/24 scope global mlx_0_1 00:19:14.806 valid_lft forever preferred_lft forever 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:14.806 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
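The get_ip_address helper traced above and below is just "ip -o -4 addr show <if>" with the CIDR column isolated: awk takes the fourth field ("192.168.100.9/24") and cut strips the prefix length. An equivalent standalone form:

    # print the first IPv4 address of an interface, without the /prefix
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig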
00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:14.807 192.168.100.9' 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:14.807 192.168.100.9' 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:14.807 192.168.100.9' 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2288015 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2288015 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2288015 ']' 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.807 15:37:52 
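RDMA_IP_LIST is a newline-separated list, so the first target IP is head -n 1 of it and the second is tail -n +2 piped back through head, exactly as traced above:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)            # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9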
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.807 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:15.066 [2024-11-03 15:37:52.598326] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:19:15.066 [2024-11-03 15:37:52.598380] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.066 [2024-11-03 15:37:52.676953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.066 [2024-11-03 15:37:52.697571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.066 [2024-11-03 15:37:52.697608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.066 [2024-11-03 15:37:52.697617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.066 [2024-11-03 15:37:52.697626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.066 [2024-11-03 15:37:52.697633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.066 [2024-11-03 15:37:52.698242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.066 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.066 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:19:15.066 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.066 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:15.066 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:15.066 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.066 15:37:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:15.326 [2024-11-03 15:37:53.018835] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8bec50/0x8c3100) succeed. 00:19:15.326 [2024-11-03 15:37:53.027785] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8c00b0/0x9047a0) succeed. 
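With both mlx5 IB devices registered, everything else is driven over the JSON-RPC socket: the rdma transport above, then (in the lines that follow) two 64 MiB/512 B malloc bdevs, a subsystem, namespaces and a listener. A condensed sketch of that RPC sequence, with rpc.py standing in for the full scripts/rpc.py path:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420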
00:19:15.326 15:37:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:15.326 15:37:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:15.326 15:37:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:15.585 Malloc1 00:19:15.585 15:37:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:15.844 Malloc2 00:19:15.844 15:37:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:16.104 15:37:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:16.104 15:37:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:16.363 [2024-11-03 15:37:54.021972] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:16.363 15:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:16.363 15:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f6ba18b1-6dfb-40cd-8293-b72742d6eb0b -a 192.168.100.8 -s 4420 -i 4 00:19:16.622 15:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:16.622 15:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:16.622 15:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:16.622 15:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:19:16.622 15:37:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:19.157 [ 0]:0x1 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e70528b0732f4b34b6e5bbffc233df77 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e70528b0732f4b34b6e5bbffc233df77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:19.157 [ 0]:0x1 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e70528b0732f4b34b6e5bbffc233df77 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e70528b0732f4b34b6e5bbffc233df77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:19.157 [ 1]:0x2 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4365a10c14fd4893989eb60a91246e77 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4365a10c14fd4893989eb60a91246e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:19.157 15:37:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:19:19.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.416 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:19.675 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:19.934 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:19.934 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f6ba18b1-6dfb-40cd-8293-b72742d6eb0b -a 192.168.100.8 -s 4420 -i 4 00:19:20.194 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:20.194 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:20.194 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:20.194 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:19:20.194 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:19:20.194 15:37:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:22.099 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:22.357 [ 0]:0x2 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:22.357 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.358 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4365a10c14fd4893989eb60a91246e77 00:19:22.358 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4365a10c14fd4893989eb60a91246e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.358 15:37:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:22.616 [ 0]:0x1 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.616 15:38:00 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e70528b0732f4b34b6e5bbffc233df77 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e70528b0732f4b34b6e5bbffc233df77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:22.616 [ 1]:0x2 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4365a10c14fd4893989eb60a91246e77 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4365a10c14fd4893989eb60a91246e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.616 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:22.875 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:22.875 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:22.875 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:22.875 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:22.875 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.875 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:22.875 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.875 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:22.875 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:22.876 [ 0]:0x2 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4365a10c14fd4893989eb60a91246e77 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4365a10c14fd4893989eb60a91246e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:22.876 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:23.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:23.172 15:38:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:23.451 15:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:23.451 15:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f6ba18b1-6dfb-40cd-8293-b72742d6eb0b -a 192.168.100.8 -s 4420 -i 4 00:19:23.711 15:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:23.711 15:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:23.711 15:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:23.711 15:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:19:23.711 15:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:19:23.711 15:38:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:26.246 15:38:03 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:26.246 [ 0]:0x1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e70528b0732f4b34b6e5bbffc233df77 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e70528b0732f4b34b6e5bbffc233df77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:26.246 [ 1]:0x2 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4365a10c14fd4893989eb60a91246e77 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4365a10c14fd4893989eb60a91246e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:26.246 15:38:03 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:26.246 [ 0]:0x2 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4365a10c14fd4893989eb60a91246e77 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4365a10c14fd4893989eb60a91246e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:19:26.246 15:38:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:26.505 [2024-11-03 15:38:04.077427] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:26.505 request: 00:19:26.505 { 00:19:26.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.505 "nsid": 2, 00:19:26.505 "host": "nqn.2016-06.io.spdk:host1", 00:19:26.505 "method": "nvmf_ns_remove_host", 00:19:26.505 "req_id": 1 00:19:26.505 } 00:19:26.505 Got JSON-RPC error response 00:19:26.505 response: 00:19:26.505 { 00:19:26.505 "code": -32602, 00:19:26.505 "message": "Invalid parameters" 00:19:26.505 } 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:26.505 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:26.506 15:38:04 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.506 [ 0]:0x2 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4365a10c14fd4893989eb60a91246e77 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4365a10c14fd4893989eb60a91246e77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:26.506 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:26.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2290120 00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2290120 /var/tmp/host.sock 00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2290120 ']' 00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:26.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
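At this point the script has started a second spdk_tgt (pid 2290120) to play the host role, giving it its own RPC socket so host-side bdev commands never touch the target under test; waitforlisten 2290120 /var/tmp/host.sock, visible in the trace above, blocks until that socket accepts RPCs. Condensed from the xtrace, and with the hostrpc body paraphrased from the expansion shown at ns_masking.sh line 48 (so read this as a sketch of the flow, not the script itself), the host-side setup is roughly:

# Sketch of the host-side flow the trace below exercises. Paths, socket,
# NQNs and RPC names are taken from the xtrace; hostrpc is paraphrased.
HOST_SOCK=/var/tmp/host.sock
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

hostrpc() {
    # Direct RPCs at the second spdk_tgt instead of the default target socket
    "$SPDK/scripts/rpc.py" -s "$HOST_SOCK" "$@"
}

# Launch a second SPDK app (core mask 0x2) to act as the NVMe-oF host side
"$SPDK/build/bin/spdk_tgt" -r "$HOST_SOCK" -m 2 &
hostpid=$!

# Once the socket is up, attach the subsystem twice, once per host NQN;
# per the trace this yields bdevs nvme0n1 and nvme1n2 on the host app
hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1

Because each controller is attached under a different host NQN, the bdev_get_bdevs UUIDs seen afterwards line up with the per-host namespace masking configured on the target.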
00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:26.765 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 [2024-11-03 15:38:04.579716] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:19:27.024 [2024-11-03 15:38:04.579768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290120 ] 00:19:27.024 [2024-11-03 15:38:04.655119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.024 [2024-11-03 15:38:04.677377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.283 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:27.283 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:19:27.283 15:38:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:27.283 15:38:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:27.541 15:38:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b9c516c0-f29e-4f73-9a59-6489ae36e9d8 00:19:27.541 15:38:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:27.541 15:38:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B9C516C0F29E4F739A596489AE36E9D8 -i 00:19:27.800 15:38:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7e511899-1a09-4964-8eb2-3794c9eb86a3 00:19:27.800 15:38:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:27.800 15:38:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7E5118991A0949648EB23794C9EB86A3 -i 00:19:28.059 15:38:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:28.059 15:38:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:28.318 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:28.318 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:19:28.578 nvme0n1 00:19:28.578 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:28.578 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:28.837 nvme1n2 00:19:28.837 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:28.837 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:28.837 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:28.837 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:28.837 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:29.096 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:29.096 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:29.096 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:29.096 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:29.356 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b9c516c0-f29e-4f73-9a59-6489ae36e9d8 == \b\9\c\5\1\6\c\0\-\f\2\9\e\-\4\f\7\3\-\9\a\5\9\-\6\4\8\9\a\e\3\6\e\9\d\8 ]] 00:19:29.356 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:29.356 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:29.356 15:38:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:29.356 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7e511899-1a09-4964-8eb2-3794c9eb86a3 == \7\e\5\1\1\8\9\9\-\1\a\0\9\-\4\9\6\4\-\8\e\b\2\-\3\7\9\4\c\9\e\b\8\6\a\3 ]] 00:19:29.356 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:29.615 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid b9c516c0-f29e-4f73-9a59-6489ae36e9d8 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B9C516C0F29E4F739A596489AE36E9D8 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B9C516C0F29E4F739A596489AE36E9D8 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:19:29.874 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B9C516C0F29E4F739A596489AE36E9D8 00:19:29.874 [2024-11-03 15:38:07.656753] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:29.874 [2024-11-03 15:38:07.656787] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:29.874 [2024-11-03 15:38:07.656798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.874 request: 00:19:29.874 { 00:19:29.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.874 "namespace": { 00:19:29.874 "bdev_name": "invalid", 00:19:29.874 "nsid": 1, 00:19:29.874 "nguid": "B9C516C0F29E4F739A596489AE36E9D8", 00:19:29.874 "no_auto_visible": false 00:19:29.874 }, 00:19:29.874 "method": "nvmf_subsystem_add_ns", 00:19:29.874 "req_id": 1 00:19:29.874 } 00:19:29.874 Got JSON-RPC error response 00:19:29.874 response: 00:19:29.874 { 00:19:29.874 "code": -32602, 00:19:29.874 "message": "Invalid parameters" 00:19:29.874 } 00:19:30.134 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:30.134 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:30.134 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:30.134 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:30.134 15:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid b9c516c0-f29e-4f73-9a59-6489ae36e9d8 00:19:30.134 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:30.134 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B9C516C0F29E4F739A596489AE36E9D8 -i 00:19:30.134 15:38:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:32.671 15:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:32.671 15:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:32.671 15:38:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2290120 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2290120 ']' 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2290120 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2290120 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2290120' 00:19:32.671 killing process with pid 2290120 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2290120 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2290120 00:19:32.671 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:32.931 15:38:10 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:32.931 rmmod nvme_rdma 00:19:32.931 rmmod nvme_fabrics 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2288015 ']' 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2288015 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2288015 ']' 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2288015 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.931 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2288015 00:19:33.190 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:33.190 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:33.190 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2288015' 00:19:33.190 killing process with pid 2288015 00:19:33.190 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2288015 00:19:33.190 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2288015 00:19:33.450 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:33.450 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:33.450 00:19:33.450 real 0m25.582s 00:19:33.450 user 0m31.640s 00:19:33.450 sys 0m7.762s 00:19:33.450 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:33.450 15:38:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:33.450 ************************************ 00:19:33.450 END TEST nvmf_ns_masking 00:19:33.450 ************************************ 00:19:33.450 15:38:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:33.450 15:38:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:19:33.450 15:38:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:33.450 15:38:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:33.450 15:38:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.450 ************************************ 00:19:33.450 START TEST nvmf_nvme_cli 00:19:33.450 ************************************ 
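Like the masking test before it, nvme_cli.sh begins by sourcing scripts/common.sh, and the next stretch of trace is mostly the lcov version probe (lt 1.15 2) choosing which coverage options to export. The comparison it steps through reduces to roughly the sketch below; the splitting and operator dispatch are paraphrased from the xtrace, and the real cmp_versions also routes each component through decimal() to normalize non-numeric parts, so treat this as an approximation:

# Sketch of the version comparison traced below (scripts/common.sh),
# reconstructed from the xtrace rather than copied from the script.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:           # split version strings on dots, dashes, colons
    local op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    # Walk the longer component list; missing components count as 0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
            [[ $op == '>' || $op == '>=' ]]
            return
        fi
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then
            [[ $op == '<' || $op == '<=' ]]
            return
        fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
}

With lcov 1.15 installed, lt 1.15 2 is decided by the first components (1 < 2) and succeeds, which is why the trace goes on to export the branch- and function-coverage LCOV_OPTS shown next.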
00:19:33.450 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:19:33.450 * Looking for test storage... 00:19:33.450 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:33.450 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:33.450 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:19:33.450 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:33.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.710 --rc genhtml_branch_coverage=1 00:19:33.710 --rc genhtml_function_coverage=1 00:19:33.710 --rc genhtml_legend=1 00:19:33.710 --rc geninfo_all_blocks=1 00:19:33.710 --rc geninfo_unexecuted_blocks=1 00:19:33.710 00:19:33.710 ' 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:33.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.710 --rc genhtml_branch_coverage=1 00:19:33.710 --rc genhtml_function_coverage=1 00:19:33.710 --rc genhtml_legend=1 00:19:33.710 --rc geninfo_all_blocks=1 00:19:33.710 --rc geninfo_unexecuted_blocks=1 00:19:33.710 00:19:33.710 ' 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:33.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.710 --rc genhtml_branch_coverage=1 00:19:33.710 --rc genhtml_function_coverage=1 00:19:33.710 --rc genhtml_legend=1 00:19:33.710 --rc geninfo_all_blocks=1 00:19:33.710 --rc geninfo_unexecuted_blocks=1 00:19:33.710 00:19:33.710 ' 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:33.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.710 --rc genhtml_branch_coverage=1 00:19:33.710 --rc genhtml_function_coverage=1 00:19:33.710 --rc genhtml_legend=1 00:19:33.710 --rc geninfo_all_blocks=1 00:19:33.710 --rc geninfo_unexecuted_blocks=1 00:19:33.710 00:19:33.710 ' 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:33.710 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.711 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.711 15:38:11 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:33.711 15:38:11 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.288 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:40.289 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:40.289 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
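
The arrays being filled here are the harness's NIC allow-list: pci_bus_cache maps "vendor:device" keys to PCI addresses, and the e810/x722/mlx arrays collect the Intel (0x8086) and Mellanox (0x15b3) parts the nvmf tests know how to drive. On this host the lookup yields two Mellanox 0x1015 functions, 0000:d9:00.0 and (just below) 0000:d9:00.1. The pattern, assuming pci_bus_cache is an associative array populated earlier by a bus scan:

    declare -A pci_bus_cache    # "vendor:device" -> space-separated PCI addresses
    mellanox=0x15b3
    mlx=()
    # unquoted expansion word-splits when a key matches several functions
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
    for pci in "${mlx[@]}"; do
        echo "candidate RDMA NIC: $pci"
    done
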
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:40.289 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:40.289 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:40.289 15:38:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
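
rdma_device_init above is the kernel-side setup: load the InfiniBand/RDMA module stack, then enumerate the net devices that back RDMA interfaces (the rxe_cfg helper prints that list, and the [[ ... == \m\l\x\_\0\_\0 ]] matches pick out mlx_0_0 and mlx_0_1). The module-loading half amounts to, roughly (the trace runs each modprobe as a separate command, presumably under set -e):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
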
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:40.289 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:40.549 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:40.549 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:40.549 altname enp217s0f0np0 00:19:40.549 altname ens818f0np0 00:19:40.549 inet 192.168.100.8/24 scope global mlx_0_0 00:19:40.549 valid_lft forever preferred_lft forever 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:40.549 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:40.550 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:40.550 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:40.550 altname enp217s0f1np1 00:19:40.550 altname ens818f1np1 00:19:40.550 inet 192.168.100.9/24 scope global mlx_0_1 00:19:40.550 valid_lft forever preferred_lft forever 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:40.550 15:38:18 
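
get_ip_address, run here for both ports, is a one-line ip(8) pipeline: take the first IPv4 record on the interface and strip the /prefix, giving 192.168.100.8 and 192.168.100.9. As a standalone sketch:

    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per line; field 4 is ADDR/PREFIX
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this host
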
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:40.550 192.168.100.9' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:40.550 192.168.100.9' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:40.550 192.168.100.9' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:19:40.550 15:38:18 
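
get_available_rdma_ips re-walks the interface list to build RDMA_IP_LIST, a newline-separated string of the RDMA-capable addresses; the harness then peels off the first entry as NVMF_FIRST_TARGET_IP and (continuing just below) the second with tail/head:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
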
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2294663 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2294663 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 2294663 ']' 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:40.550 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:40.550 [2024-11-03 15:38:18.273492] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:19:40.550 [2024-11-03 15:38:18.273540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.811 [2024-11-03 15:38:18.350774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.811 [2024-11-03 15:38:18.374781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.811 [2024-11-03 15:38:18.374824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:40.811 [2024-11-03 15:38:18.374835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.811 [2024-11-03 15:38:18.374843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.811 [2024-11-03 15:38:18.374850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.811 [2024-11-03 15:38:18.376609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.811 [2024-11-03 15:38:18.376702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.811 [2024-11-03 15:38:18.376794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.811 [2024-11-03 15:38:18.376796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.811 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:40.811 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:19:40.811 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.811 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.811 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:40.811 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.811 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:40.811 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.811 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:40.811 [2024-11-03 15:38:18.549370] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfd6c50/0xfdb100) succeed. 00:19:40.811 [2024-11-03 15:38:18.558397] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfd8290/0x101c7a0) succeed. 
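
At this point the target side is fully up: nvmf_tgt was started with core mask 0xF (hence the four reactor notices above), waitforlisten confirmed the RPC socket, and nvmf_create_transport registered the RDMA transport, which opened both mlx5 IB devices. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so the equivalent by hand is approximately:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # ... wait for /var/tmp/spdk.sock to accept RPCs, then:
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
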
00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.071 Malloc0 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.071 Malloc1 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.071 [2024-11-03 15:38:18.770698] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:41.071 15:38:18 
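
This RPC sequence provisions the test subsystem: two 64 MiB malloc bdevs with 512-byte blocks become namespaces of nqn.2016-06.io.spdk:cnode1, which then listens on 192.168.100.8:4420 over RDMA; the discovery subsystem gets its own listener last. Condensed to plain rpc.py calls:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
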
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.071 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:19:41.331 00:19:41.331 Discovery Log Number of Records 2, Generation counter 2 00:19:41.331 =====Discovery Log Entry 0====== 00:19:41.331 trtype: rdma 00:19:41.331 adrfam: ipv4 00:19:41.331 subtype: current discovery subsystem 00:19:41.331 treq: not required 00:19:41.331 portid: 0 00:19:41.331 trsvcid: 4420 00:19:41.331 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:41.331 traddr: 192.168.100.8 00:19:41.331 eflags: explicit discovery connections, duplicate discovery information 00:19:41.331 rdma_prtype: not specified 00:19:41.331 rdma_qptype: connected 00:19:41.331 rdma_cms: rdma-cm 00:19:41.331 rdma_pkey: 0x0000 00:19:41.331 =====Discovery Log Entry 1====== 00:19:41.331 trtype: rdma 00:19:41.331 adrfam: ipv4 00:19:41.331 subtype: nvme subsystem 00:19:41.331 treq: not required 00:19:41.331 portid: 0 00:19:41.331 trsvcid: 4420 00:19:41.331 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:41.331 traddr: 192.168.100.8 00:19:41.331 eflags: none 00:19:41.331 rdma_prtype: not specified 00:19:41.331 rdma_qptype: connected 00:19:41.331 rdma_cms: rdma-cm 00:19:41.331 rdma_pkey: 0x0000 00:19:41.331 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:41.331 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:41.331 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:41.332 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:41.332 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:41.332 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:41.332 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:41.332 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:41.332 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:41.332 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:41.332 15:38:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:42.269 15:38:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:42.269 15:38:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:19:42.270 15:38:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:42.270 15:38:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
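
The discovery log above shows both expected records (the discovery subsystem and cnode1), after which the host connects using the "nvme connect -i 15" form that common.sh selected earlier for RDMA targets; waitforserial then takes over just below. By hand:

    host=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    nvme discover -t rdma -a 192.168.100.8 -s 4420 \
        --hostnqn=$host --hostid=8013ee90-59d8-e711-906e-00163566263e
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=$host --hostid=8013ee90-59d8-e711-906e-00163566263e
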
common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:19:42.270 15:38:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:19:42.270 15:38:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.181 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:44.182 /dev/nvme0n2 ]] 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
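
waitforserial, finishing here, polls lsblk until the expected number of block devices carrying the subsystem serial appear (two, since both namespaces surface as /dev/nvme0n1 and /dev/nvme0n2); get_nvme_devs then re-parses "nvme list" to collect the device names. A trimmed sketch of the polling idiom:

    waitforserial() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
            sleep 2
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME 2
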
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:44.182 15:38:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:45.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:45.560 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:45.560 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:45.561 15:38:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:45.561 
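
Teardown mirrors setup: disconnect by NQN, poll until the serial disappears from lsblk, delete the subsystem over RPC, and then (continuing below) nvmfcleanup retries "modprobe -v -r nvme-rdma" in a 1..20 loop because the module can stay busy for a moment after disconnect. The host side reduces to:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # wait until no block device reports the serial any more
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
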
15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:45.561 rmmod nvme_rdma 00:19:45.561 rmmod nvme_fabrics 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2294663 ']' 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2294663 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 2294663 ']' 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 2294663 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2294663 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2294663' 00:19:45.561 killing process with pid 2294663 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 2294663 00:19:45.561 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 2294663 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:45.821 00:19:45.821 real 0m12.317s 00:19:45.821 user 0m21.791s 00:19:45.821 sys 0m5.972s 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:45.821 ************************************ 00:19:45.821 END TEST nvmf_nvme_cli 00:19:45.821 ************************************ 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.821 ************************************ 00:19:45.821 START TEST nvmf_auth_target 00:19:45.821 ************************************ 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:45.821 * Looking for test storage... 00:19:45.821 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:19:45.821 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.082 --rc genhtml_branch_coverage=1 00:19:46.082 --rc genhtml_function_coverage=1 00:19:46.082 --rc genhtml_legend=1 00:19:46.082 --rc geninfo_all_blocks=1 00:19:46.082 --rc geninfo_unexecuted_blocks=1 00:19:46.082 00:19:46.082 ' 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.082 --rc genhtml_branch_coverage=1 00:19:46.082 --rc genhtml_function_coverage=1 00:19:46.082 --rc genhtml_legend=1 00:19:46.082 --rc geninfo_all_blocks=1 00:19:46.082 --rc geninfo_unexecuted_blocks=1 00:19:46.082 00:19:46.082 ' 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.082 --rc genhtml_branch_coverage=1 00:19:46.082 --rc genhtml_function_coverage=1 00:19:46.082 --rc genhtml_legend=1 00:19:46.082 --rc geninfo_all_blocks=1 00:19:46.082 --rc geninfo_unexecuted_blocks=1 00:19:46.082 00:19:46.082 ' 00:19:46.082 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.082 --rc genhtml_branch_coverage=1 00:19:46.082 --rc genhtml_function_coverage=1 00:19:46.082 --rc genhtml_legend=1 00:19:46.082 --rc geninfo_all_blocks=1 00:19:46.082 --rc geninfo_unexecuted_blocks=1 00:19:46.082 00:19:46.082 ' 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:46.083 15:38:23 
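
Between the two tests, autotest_common.sh probes which lcov flags this machine supports: it extracts the version with awk (1.15 here), compares it field by field against 2, and since 1.15 < 2 exports the older --rc lcov_* option spelling. The comparison helper boils down to the following (simplified reconstruction; assumes purely numeric version components):

    cmp_versions() {    # e.g. cmp_versions 1.15 '<' 2
        local IFS=.-
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$3"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            if (( ${v1[i]:-0} > ${v2[i]:-0} )); then [[ $2 == '>' ]]; return; fi
            if (( ${v1[i]:-0} < ${v2[i]:-0} )); then [[ $2 == '<' ]]; return; fi
        done
        [[ $2 == '==' ]]
    }
    cmp_versions 1.15 '<' 2 && echo "use pre-2.0 lcov options"
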
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:46.083 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:46.083 15:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.659 15:38:30 
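
auth.sh declares its test matrix up front, three digests by six DH groups, and presumably iterates combinations of them once the target is up; nvmftestinit then re-runs the same NIC discovery as the nvme_cli test, which is why the PCI scan below repeats the earlier preamble verbatim. The declared matrix:

    digests=("sha256" "sha384" "sha512")
    dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            echo "auth case: digest=$digest dhgroup=$dhgroup"    # 18 combinations
        done
    done
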
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:52.659 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.659 15:38:30 
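gather_supported_nvmf_pci_devs above fills the e810, x722, and mlx arrays from a pre-built pci_bus_cache keyed by vendor:device ID, then narrows pci_devs to the Mellanox entries because the mlx5 driver is in use. A simplified stand-alone equivalent using lspci from pciutils instead of the cache; filtering by vendor alone is a simplification of the per-device-ID lookups in the trace:

mellanox=0x15b3
declare -a mlx_devs
while read -r slot vendor device; do
    # keep every Mellanox function; 0x15b3 - 0x1015 is what this testbed reports
    [[ $vendor == "${mellanox#0x}" ]] && mlx_devs+=("$slot")
done < <(lspci -Dnm | awk '{gsub(/"/, ""); print $1, $3, $4}')
for pci in "${mlx_devs[@]}"; do
    echo "Found $pci"
done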
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:52.659 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:52.659 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:52.659 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.659 15:38:30 
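The per-device loop above resolves each PCI function to its kernel network interface purely through sysfs (common.sh@411 and @427-428). The same lookup in isolation, assuming the device path exists:

pci=0000:d9:00.0                                   # first Mellanox port on this testbed
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the directory prefix
echo "Found net devices under $pci: ${pci_net_devs[*]}"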
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:52.659 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
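rdma_device_init above loads the InfiniBand/RDMA kernel stack before any addresses are assigned. The same sequence, runnable on its own; modprobe is idempotent, so re-running is harmless:

# requires root; order matches nvmf/common.sh@66-72 in the trace
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done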
in "${rxe_net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:52.660 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.660 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:52.660 altname enp217s0f0np0 00:19:52.660 altname ens818f0np0 00:19:52.660 inet 192.168.100.8/24 scope global mlx_0_0 00:19:52.660 valid_lft forever preferred_lft forever 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:52.660 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.660 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:52.660 altname enp217s0f1np1 00:19:52.660 altname ens818f1np1 00:19:52.660 inet 192.168.100.9/24 scope global mlx_0_1 00:19:52.660 valid_lft forever preferred_lft forever 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:52.660 15:38:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:52.660 192.168.100.9' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:52.660 192.168.100.9' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:52.660 192.168.100.9' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2298927 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2298927 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2298927 ']' 00:19:52.660 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.661 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:52.661 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
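The RDMA_IP_LIST handling above reduces to a head/tail split of a newline-separated list (common.sh@485-486); isolated, with the values this run produced:

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9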
00:19:52.661 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:52.661 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2298953 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:52.920 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2f20884bbd1f37496c2e8f3d043240f1dbd26d7370a05fd8 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yjG 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2f20884bbd1f37496c2e8f3d043240f1dbd26d7370a05fd8 0 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2f20884bbd1f37496c2e8f3d043240f1dbd26d7370a05fd8 0 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2f20884bbd1f37496c2e8f3d043240f1dbd26d7370a05fd8 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yjG 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yjG 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.yjG 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=956da2b4e772778f1d5d3e8b89a736ee7de67bdc2e3fa9a57888bf9e6b928b25 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CyA 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 956da2b4e772778f1d5d3e8b89a736ee7de67bdc2e3fa9a57888bf9e6b928b25 3 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 956da2b4e772778f1d5d3e8b89a736ee7de67bdc2e3fa9a57888bf9e6b928b25 3 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=956da2b4e772778f1d5d3e8b89a736ee7de67bdc2e3fa9a57888bf9e6b928b25 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:52.921 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CyA 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CyA 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.CyA 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:53.181 15:38:30 
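gen_dhchap_key above draws its raw material from /dev/urandom via xxd and hands it to an inline python step. A sketch of what that step plausibly computes, assuming the standard NVMe DH-HMAC-CHAP secret representation (the hex string treated as ASCII key material, suffixed with its little-endian CRC-32, base64-encoded, and tagged with the digest id); the shape matches the DHHC-1:00:...: secrets used at connect time later in this log:

format_dhchap_key() {
    local key=$1 digest=$2      # digest: 0=null 1=sha256 2=sha384 3=sha512
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # hex string used as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")   # endianness assumed, per TP-8006
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}
format_dhchap_key 2f20884bbd1f37496c2e8f3d043240f1dbd26d7370a05fd8 0
# -> DHHC-1:00:MmYyMDg4...ZmQ4uvtkDg==:  (the key0 secret seen below)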
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eecb202651641b9d84d7687f31a53b76 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dMD 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eecb202651641b9d84d7687f31a53b76 1 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eecb202651641b9d84d7687f31a53b76 1 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eecb202651641b9d84d7687f31a53b76 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dMD 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dMD 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dMD 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cf777513fdc4b7a32e5dcfd711224a988db203a06aedc38b 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.NN6 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cf777513fdc4b7a32e5dcfd711224a988db203a06aedc38b 2 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cf777513fdc4b7a32e5dcfd711224a988db203a06aedc38b 2 00:19:53.181 15:38:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cf777513fdc4b7a32e5dcfd711224a988db203a06aedc38b 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.NN6 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.NN6 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.NN6 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5eb43966c6375b4de96311395d32f5e63f490225c3926b6d 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1SL 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5eb43966c6375b4de96311395d32f5e63f490225c3926b6d 2 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5eb43966c6375b4de96311395d32f5e63f490225c3926b6d 2 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5eb43966c6375b4de96311395d32f5e63f490225c3926b6d 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1SL 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1SL 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.1SL 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:53.181 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=df4875eed1b72ba015f2cb4743cadd60 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.WdO 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key df4875eed1b72ba015f2cb4743cadd60 1 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 df4875eed1b72ba015f2cb4743cadd60 1 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=df4875eed1b72ba015f2cb4743cadd60 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:53.182 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:53.441 15:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.WdO 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.WdO 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.WdO 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1cc9bc1e50b81063b9e9115f49689b75a0087101c8c9379097e8b308fb304761 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:53.441 15:38:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gsg 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1cc9bc1e50b81063b9e9115f49689b75a0087101c8c9379097e8b308fb304761 3 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1cc9bc1e50b81063b9e9115f49689b75a0087101c8c9379097e8b308fb304761 3 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1cc9bc1e50b81063b9e9115f49689b75a0087101c8c9379097e8b308fb304761 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gsg 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gsg 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.gsg 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2298927 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2298927 ']' 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.441 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2298953 /var/tmp/host.sock 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2298953 ']' 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
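For reference, the key files generated above, as recorded in keys[] and ckeys[]; ckeys[3] is left empty, so key3 runs without a controller key:

keys[0]=/tmp/spdk.key-null.yjG    (null, 48)    ckeys[0]=/tmp/spdk.key-sha512.CyA  (sha512, 64)
keys[1]=/tmp/spdk.key-sha256.dMD  (sha256, 32)  ckeys[1]=/tmp/spdk.key-sha384.NN6  (sha384, 48)
keys[2]=/tmp/spdk.key-sha384.1SL  (sha384, 48)  ckeys[2]=/tmp/spdk.key-sha256.WdO  (sha256, 32)
keys[3]=/tmp/spdk.key-sha512.gsg  (sha512, 64)  ckeys[3]=  (none)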
00:19:53.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.700 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yjG 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yjG 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yjG 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.CyA ]] 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CyA 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.960 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.219 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.219 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CyA 00:19:54.219 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CyA 00:19:54.219 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:54.219 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dMD 00:19:54.219 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.219 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.219 15:38:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.219 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dMD 00:19:54.219 15:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dMD 00:19:54.478 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.NN6 ]] 00:19:54.478 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NN6 00:19:54.478 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.478 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.478 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.478 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NN6 00:19:54.478 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NN6 00:19:54.737 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:54.737 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1SL 00:19:54.737 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.737 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.737 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.737 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.1SL 00:19:54.737 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.1SL 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.WdO ]] 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WdO 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WdO 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WdO 00:19:54.997 15:38:32 
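The registration pattern traced above (target/auth.sh@108-113), condensed: each key, and its controller key when one exists, must be added to the keyring of both the nvmf target (default RPC socket) and the host application (/var/tmp/host.sock) before the handshake can be exercised. keys[] and ckeys[] are the arrays populated earlier:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                       # target side
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}" # host side
    if [[ -n ${ckeys[$i]} ]]; then
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done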
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gsg 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gsg 00:19:54.997 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gsg 00:19:55.256 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:55.256 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:55.256 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.256 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.256 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:55.256 15:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.516 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.775 00:19:55.775 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.775 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.775 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.034 { 00:19:56.034 "cntlid": 1, 00:19:56.034 "qid": 0, 00:19:56.034 "state": "enabled", 00:19:56.034 "thread": "nvmf_tgt_poll_group_000", 00:19:56.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:56.034 "listen_address": { 00:19:56.034 "trtype": "RDMA", 00:19:56.034 "adrfam": "IPv4", 00:19:56.034 "traddr": "192.168.100.8", 00:19:56.034 "trsvcid": "4420" 00:19:56.034 }, 00:19:56.034 "peer_address": { 00:19:56.034 "trtype": "RDMA", 00:19:56.034 "adrfam": "IPv4", 00:19:56.034 "traddr": "192.168.100.8", 00:19:56.034 "trsvcid": "49191" 00:19:56.034 }, 00:19:56.034 "auth": { 00:19:56.034 "state": "completed", 00:19:56.034 "digest": "sha256", 00:19:56.034 "dhgroup": "null" 00:19:56.034 } 00:19:56.034 } 00:19:56.034 ]' 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.034 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
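connect_authenticate's verification step, shown above with its JSON payload, boils down to three jq assertions against the subsystem's qpair list; rpc_cmd is the script's wrapper around rpc.py on the target socket:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # auth actually finished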
bdev_nvme_detach_controller nvme0 00:19:56.293 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:19:56.294 15:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:19:56.861 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.120 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.380 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.380 15:38:34 
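After the SPDK-host checks, the same credentials are exercised through the kernel initiator. The nvme connect invocation above, reflowed for readability; the secrets are abbreviated here, with the full DHHC-1 strings appearing verbatim in the trace:

nvme connect -t rdma -a 192.168.100.8 -i 1 -l 0 \
    -n nqn.2024-03.io.spdk:cnode0 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e \
    --dhchap-secret 'DHHC-1:00:MmYyMDg4...uvtkDg==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:OTU2ZGEy...NVY7rVk=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0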
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.380 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.380 15:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.380 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.639 { 00:19:57.639 "cntlid": 3, 00:19:57.639 "qid": 0, 00:19:57.639 "state": "enabled", 00:19:57.639 "thread": "nvmf_tgt_poll_group_000", 00:19:57.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:57.639 "listen_address": { 00:19:57.639 "trtype": "RDMA", 00:19:57.639 "adrfam": "IPv4", 00:19:57.639 "traddr": "192.168.100.8", 00:19:57.639 "trsvcid": "4420" 00:19:57.639 }, 00:19:57.639 "peer_address": { 00:19:57.639 "trtype": "RDMA", 00:19:57.639 "adrfam": "IPv4", 00:19:57.639 "traddr": "192.168.100.8", 00:19:57.639 "trsvcid": "38565" 00:19:57.639 }, 00:19:57.639 "auth": { 00:19:57.639 "state": "completed", 00:19:57.639 "digest": "sha256", 00:19:57.639 "dhgroup": "null" 00:19:57.639 } 00:19:57.639 } 00:19:57.639 ]' 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.639 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.898 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:57.898 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.898 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.898 15:38:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.898 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.158 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:19:58.158 15:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:19:58.726 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.726 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:58.726 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.726 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.726 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.726 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.726 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:58.726 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.986 15:38:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.986 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.245 00:19:59.245 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.245 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.245 15:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.504 { 00:19:59.504 "cntlid": 5, 00:19:59.504 "qid": 0, 00:19:59.504 "state": "enabled", 00:19:59.504 "thread": "nvmf_tgt_poll_group_000", 00:19:59.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:59.504 "listen_address": { 00:19:59.504 "trtype": "RDMA", 00:19:59.504 "adrfam": "IPv4", 00:19:59.504 "traddr": "192.168.100.8", 00:19:59.504 "trsvcid": "4420" 00:19:59.504 }, 00:19:59.504 "peer_address": { 00:19:59.504 "trtype": "RDMA", 00:19:59.504 "adrfam": "IPv4", 00:19:59.504 "traddr": "192.168.100.8", 00:19:59.504 "trsvcid": "57074" 00:19:59.504 }, 00:19:59.504 "auth": { 00:19:59.504 "state": "completed", 00:19:59.504 "digest": "sha256", 00:19:59.504 "dhgroup": "null" 00:19:59.504 } 00:19:59.504 } 00:19:59.504 ]' 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:59.504 15:38:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.504 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.764 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:19:59.764 15:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:00.332 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.591 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:00.591 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.592 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.851 00:20:00.851 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.851 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.851 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.110 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.110 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.110 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.110 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.110 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.110 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.110 { 00:20:01.110 "cntlid": 7, 00:20:01.110 "qid": 0, 00:20:01.110 "state": "enabled", 00:20:01.110 "thread": "nvmf_tgt_poll_group_000", 00:20:01.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:01.110 "listen_address": { 00:20:01.110 "trtype": "RDMA", 00:20:01.110 "adrfam": "IPv4", 00:20:01.110 "traddr": "192.168.100.8", 00:20:01.110 "trsvcid": "4420" 00:20:01.110 }, 00:20:01.110 "peer_address": { 00:20:01.110 "trtype": "RDMA", 00:20:01.110 "adrfam": "IPv4", 00:20:01.110 "traddr": "192.168.100.8", 00:20:01.110 "trsvcid": "58571" 00:20:01.110 }, 00:20:01.110 "auth": { 00:20:01.110 "state": "completed", 00:20:01.110 "digest": "sha256", 00:20:01.110 "dhgroup": "null" 00:20:01.110 } 00:20:01.110 } 00:20:01.110 ]' 00:20:01.110 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.110 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.111 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
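The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion traced at target/auth.sh@68 above explains why the key3 iteration registers the host with --dhchap-key key3 alone: the controller-key argument pair is appended only when a controller key exists for that key id, so key3 exercises unidirectional authentication. A minimal standalone sketch of the idiom, with hypothetical stand-in values for the test's key tables:

    #!/usr/bin/env bash
    # ckeys[3] is deliberately empty, mirroring the key3 case in this log.
    ckeys=("ckey0" "ckey1" "ckey2" "")
    keyid=3
    # :+ expands to the alternate words only when ckeys[keyid] is set and
    # non-empty, so for keyid=3 the array stays empty.
    ckey_args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # With an empty array, only --dhchap-key key3 reaches the add_host RPC:
    echo nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey_args[@]}"
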
00:20:01.111 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:01.111 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.370 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.370 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.370 15:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.370 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:01.370 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:01.937 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.196 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:02.196 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.196 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.196 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.196 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.196 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.196 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.196 15:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.456 15:38:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.456 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.715 00:20:02.715 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.715 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.715 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.041 { 00:20:03.041 "cntlid": 9, 00:20:03.041 "qid": 0, 00:20:03.041 "state": "enabled", 00:20:03.041 "thread": "nvmf_tgt_poll_group_000", 00:20:03.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:03.041 "listen_address": { 00:20:03.041 "trtype": "RDMA", 00:20:03.041 "adrfam": "IPv4", 00:20:03.041 "traddr": "192.168.100.8", 00:20:03.041 "trsvcid": "4420" 00:20:03.041 }, 00:20:03.041 "peer_address": { 00:20:03.041 "trtype": "RDMA", 00:20:03.041 "adrfam": "IPv4", 00:20:03.041 "traddr": "192.168.100.8", 00:20:03.041 "trsvcid": "48256" 00:20:03.041 }, 00:20:03.041 "auth": { 00:20:03.041 "state": "completed", 00:20:03.041 "digest": "sha256", 00:20:03.041 "dhgroup": "ffdhe2048" 00:20:03.041 } 00:20:03.041 } 00:20:03.041 ]' 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
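The three jq probes that follow (auth.sh@75-@77) are how each iteration proves the DH-HMAC-CHAP negotiation actually ran: the qpair's auth block must report the expected digest, the expected dhgroup, and state "completed". A condensed, self-contained sketch of that check, using the rpc.py path and subsystem NQN from this log (the helper name is ours, not the test's):

    #!/usr/bin/env bash
    # Sketch of the verification step traced at target/auth.sh@73-@77.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    verify_auth() {
        local digest=$1 dhgroup=$2 qpairs
        # Target-side RPC: dump the qpairs of the subsystem under test.
        qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"   ]] &&
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup"  ]] &&
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    }
    verify_auth sha256 ffdhe2048   # the pair asserted in the entries below
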
00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.041 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.343 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:03.343 15:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:03.911 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.911 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:03.911 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.911 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.911 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.911 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.911 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.911 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.171 15:38:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.171 15:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.430 00:20:04.430 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.430 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.430 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.690 { 00:20:04.690 "cntlid": 11, 00:20:04.690 "qid": 0, 00:20:04.690 "state": "enabled", 00:20:04.690 "thread": "nvmf_tgt_poll_group_000", 00:20:04.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:04.690 "listen_address": { 00:20:04.690 "trtype": "RDMA", 00:20:04.690 "adrfam": "IPv4", 00:20:04.690 "traddr": "192.168.100.8", 00:20:04.690 "trsvcid": "4420" 00:20:04.690 }, 00:20:04.690 "peer_address": { 00:20:04.690 "trtype": "RDMA", 00:20:04.690 "adrfam": "IPv4", 00:20:04.690 "traddr": 
"192.168.100.8", 00:20:04.690 "trsvcid": "56811" 00:20:04.690 }, 00:20:04.690 "auth": { 00:20:04.690 "state": "completed", 00:20:04.690 "digest": "sha256", 00:20:04.690 "dhgroup": "ffdhe2048" 00:20:04.690 } 00:20:04.690 } 00:20:04.690 ]' 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.690 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.950 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:04.950 15:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:05.518 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.518 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:05.518 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.518 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.814 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.814 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.814 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.815 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.074 00:20:06.074 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.074 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.074 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.334 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.334 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.334 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.334 15:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.334 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.334 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.334 { 00:20:06.334 "cntlid": 13, 00:20:06.334 "qid": 0, 00:20:06.334 "state": "enabled", 00:20:06.334 "thread": "nvmf_tgt_poll_group_000", 00:20:06.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:06.334 "listen_address": { 00:20:06.334 
"trtype": "RDMA", 00:20:06.334 "adrfam": "IPv4", 00:20:06.334 "traddr": "192.168.100.8", 00:20:06.334 "trsvcid": "4420" 00:20:06.334 }, 00:20:06.334 "peer_address": { 00:20:06.334 "trtype": "RDMA", 00:20:06.334 "adrfam": "IPv4", 00:20:06.334 "traddr": "192.168.100.8", 00:20:06.334 "trsvcid": "33782" 00:20:06.334 }, 00:20:06.334 "auth": { 00:20:06.334 "state": "completed", 00:20:06.334 "digest": "sha256", 00:20:06.334 "dhgroup": "ffdhe2048" 00:20:06.334 } 00:20:06.334 } 00:20:06.334 ]' 00:20:06.334 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.334 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.334 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.335 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.335 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.335 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.335 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.335 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.594 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:06.594 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:07.162 15:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.422 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:07.422 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.422 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.422 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.422 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.422 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:07.422 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.681 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.941 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.941 { 00:20:07.941 "cntlid": 15, 00:20:07.941 "qid": 0, 00:20:07.941 "state": "enabled", 
00:20:07.941 "thread": "nvmf_tgt_poll_group_000", 00:20:07.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:07.941 "listen_address": { 00:20:07.941 "trtype": "RDMA", 00:20:07.941 "adrfam": "IPv4", 00:20:07.941 "traddr": "192.168.100.8", 00:20:07.941 "trsvcid": "4420" 00:20:07.941 }, 00:20:07.941 "peer_address": { 00:20:07.941 "trtype": "RDMA", 00:20:07.941 "adrfam": "IPv4", 00:20:07.941 "traddr": "192.168.100.8", 00:20:07.941 "trsvcid": "48222" 00:20:07.941 }, 00:20:07.941 "auth": { 00:20:07.941 "state": "completed", 00:20:07.941 "digest": "sha256", 00:20:07.941 "dhgroup": "ffdhe2048" 00:20:07.941 } 00:20:07.941 } 00:20:07.941 ]' 00:20:07.941 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.201 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.201 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.201 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.201 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.201 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.201 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.201 15:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.460 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:08.460 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:09.028 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.028 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:09.028 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.028 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.028 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.028 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.028 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.028 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:09.028 15:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.288 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.548 00:20:09.548 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.548 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.548 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.807 15:38:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.807 { 00:20:09.807 "cntlid": 17, 00:20:09.807 "qid": 0, 00:20:09.807 "state": "enabled", 00:20:09.807 "thread": "nvmf_tgt_poll_group_000", 00:20:09.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:09.807 "listen_address": { 00:20:09.807 "trtype": "RDMA", 00:20:09.807 "adrfam": "IPv4", 00:20:09.807 "traddr": "192.168.100.8", 00:20:09.807 "trsvcid": "4420" 00:20:09.807 }, 00:20:09.807 "peer_address": { 00:20:09.807 "trtype": "RDMA", 00:20:09.807 "adrfam": "IPv4", 00:20:09.807 "traddr": "192.168.100.8", 00:20:09.807 "trsvcid": "47991" 00:20:09.807 }, 00:20:09.807 "auth": { 00:20:09.807 "state": "completed", 00:20:09.807 "digest": "sha256", 00:20:09.807 "dhgroup": "ffdhe3072" 00:20:09.807 } 00:20:09.807 } 00:20:09.807 ]' 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.807 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.066 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.066 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.066 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:10.066 15:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:10.637 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.896 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:10.896 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.896 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.896 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.896 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.896 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:10.896 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.156 15:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.416 00:20:11.416 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.416 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.416 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.675 15:38:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.675 { 00:20:11.675 "cntlid": 19, 00:20:11.675 "qid": 0, 00:20:11.675 "state": "enabled", 00:20:11.675 "thread": "nvmf_tgt_poll_group_000", 00:20:11.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:11.675 "listen_address": { 00:20:11.675 "trtype": "RDMA", 00:20:11.675 "adrfam": "IPv4", 00:20:11.675 "traddr": "192.168.100.8", 00:20:11.675 "trsvcid": "4420" 00:20:11.675 }, 00:20:11.675 "peer_address": { 00:20:11.675 "trtype": "RDMA", 00:20:11.675 "adrfam": "IPv4", 00:20:11.675 "traddr": "192.168.100.8", 00:20:11.675 "trsvcid": "39820" 00:20:11.675 }, 00:20:11.675 "auth": { 00:20:11.675 "state": "completed", 00:20:11.675 "digest": "sha256", 00:20:11.675 "dhgroup": "ffdhe3072" 00:20:11.675 } 00:20:11.675 } 00:20:11.675 ]' 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.675 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.935 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:11.935 15:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:12.504 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.504 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:12.504 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.504 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.504 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.504 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.504 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:12.504 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.763 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.023 00:20:13.023 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.023 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.023 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.282 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.282 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.282 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.282 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.282 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.282 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.282 { 00:20:13.282 "cntlid": 21, 00:20:13.282 "qid": 0, 00:20:13.282 "state": "enabled", 00:20:13.282 "thread": "nvmf_tgt_poll_group_000", 00:20:13.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:13.282 "listen_address": { 00:20:13.282 "trtype": "RDMA", 00:20:13.282 "adrfam": "IPv4", 00:20:13.282 "traddr": "192.168.100.8", 00:20:13.282 "trsvcid": "4420" 00:20:13.282 }, 00:20:13.282 "peer_address": { 00:20:13.282 "trtype": "RDMA", 00:20:13.282 "adrfam": "IPv4", 00:20:13.282 "traddr": "192.168.100.8", 00:20:13.282 "trsvcid": "49423" 00:20:13.282 }, 00:20:13.282 "auth": { 00:20:13.282 "state": "completed", 00:20:13.282 "digest": "sha256", 00:20:13.282 "dhgroup": "ffdhe3072" 00:20:13.282 } 00:20:13.282 } 00:20:13.282 ]' 00:20:13.282 15:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.282 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.282 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.542 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:13.542 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.542 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.542 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.542 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.542 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:13.542 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:14.480 15:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.480 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.740 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.001 15:38:52 
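
Note that the key3 round above added the host with --dhchap-key key3 only: there is no ckey3, and the @68 expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) yields an empty array when ckeys[3] is unset, so controller (bidirectional) authentication is simply not requested for that key. The idiom, as it appears in the trace (written here with $keyid in place of the function's positional $3; $subnqn/$hostnqn are illustrative):

    # ${var:+word} expands to nothing when ckeys[keyid] is unset or empty, so the
    # --dhchap-ctrlr-key flag is only passed for keys that have a peer key
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
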
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.001 { 00:20:15.001 "cntlid": 23, 00:20:15.001 "qid": 0, 00:20:15.001 "state": "enabled", 00:20:15.001 "thread": "nvmf_tgt_poll_group_000", 00:20:15.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:15.001 "listen_address": { 00:20:15.001 "trtype": "RDMA", 00:20:15.001 "adrfam": "IPv4", 00:20:15.001 "traddr": "192.168.100.8", 00:20:15.001 "trsvcid": "4420" 00:20:15.001 }, 00:20:15.001 "peer_address": { 00:20:15.001 "trtype": "RDMA", 00:20:15.001 "adrfam": "IPv4", 00:20:15.001 "traddr": "192.168.100.8", 00:20:15.001 "trsvcid": "47124" 00:20:15.001 }, 00:20:15.001 "auth": { 00:20:15.001 "state": "completed", 00:20:15.001 "digest": "sha256", 00:20:15.001 "dhgroup": "ffdhe3072" 00:20:15.001 } 00:20:15.001 } 00:20:15.001 ]' 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.001 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.261 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.261 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.261 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.261 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.261 15:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.521 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:15.521 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:16.090 15:38:53 
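
The @75-@77 assertions earlier in this round are the actual pass/fail criterion: the target is asked for the subsystem's queue pairs, and the admin queue's auth block must report exactly the digest and DH group the host was pinned to, with state "completed". Distilled from the trace (rpc_cmd is the target-side RPC helper, as above):

    # on the target: confirm the admin queue negotiated what the host was pinned to
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
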
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.090 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:16.090 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.090 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.090 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.090 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.090 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.090 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.090 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.350 15:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.610 00:20:16.610 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.610 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.610 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.869 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.869 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.869 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.869 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.869 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.869 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.869 { 00:20:16.869 "cntlid": 25, 00:20:16.869 "qid": 0, 00:20:16.869 "state": "enabled", 00:20:16.869 "thread": "nvmf_tgt_poll_group_000", 00:20:16.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:16.869 "listen_address": { 00:20:16.869 "trtype": "RDMA", 00:20:16.869 "adrfam": "IPv4", 00:20:16.869 "traddr": "192.168.100.8", 00:20:16.869 "trsvcid": "4420" 00:20:16.869 }, 00:20:16.869 "peer_address": { 00:20:16.869 "trtype": "RDMA", 00:20:16.869 "adrfam": "IPv4", 00:20:16.869 "traddr": "192.168.100.8", 00:20:16.869 "trsvcid": "36054" 00:20:16.869 }, 00:20:16.869 "auth": { 00:20:16.869 "state": "completed", 00:20:16.869 "digest": "sha256", 00:20:16.869 "dhgroup": "ffdhe4096" 00:20:16.869 } 00:20:16.869 } 00:20:16.869 ]' 00:20:16.870 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.870 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.870 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.870 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.870 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.870 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.870 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.870 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.129 15:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:17.129 15:38:54 
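
The nvme_connect wrapper above expands (at @36, next) into a plain nvme-cli invocation, so each round also proves the kernel initiator can complete DH-HMAC-CHAP against the same subsystem. Stripped of the test's long generated secrets (elided below), the kernel round trip is:

    # kernel-initiator leg: secrets are passed inline in DHHC-1 form rather than
    # via a keyring; flags exactly as used by the test (-i 1, -l 0)
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
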
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:17.699 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.958 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:17.958 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.958 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.958 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.959 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.959 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.959 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.218 15:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.477 00:20:18.477 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.477 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.477 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.477 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.477 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.477 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.477 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.737 { 00:20:18.737 "cntlid": 27, 00:20:18.737 "qid": 0, 00:20:18.737 "state": "enabled", 00:20:18.737 "thread": "nvmf_tgt_poll_group_000", 00:20:18.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:18.737 "listen_address": { 00:20:18.737 "trtype": "RDMA", 00:20:18.737 "adrfam": "IPv4", 00:20:18.737 "traddr": "192.168.100.8", 00:20:18.737 "trsvcid": "4420" 00:20:18.737 }, 00:20:18.737 "peer_address": { 00:20:18.737 "trtype": "RDMA", 00:20:18.737 "adrfam": "IPv4", 00:20:18.737 "traddr": "192.168.100.8", 00:20:18.737 "trsvcid": "60587" 00:20:18.737 }, 00:20:18.737 "auth": { 00:20:18.737 "state": "completed", 00:20:18.737 "digest": "sha256", 00:20:18.737 "dhgroup": "ffdhe4096" 00:20:18.737 } 00:20:18.737 } 00:20:18.737 ]' 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.737 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.999 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:18.999 15:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:19.569 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.569 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:19.570 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.570 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.570 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.570 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.570 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:19.570 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.829 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.088 00:20:20.088 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.088 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.088 15:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.348 { 00:20:20.348 "cntlid": 29, 00:20:20.348 "qid": 0, 00:20:20.348 "state": "enabled", 00:20:20.348 "thread": "nvmf_tgt_poll_group_000", 00:20:20.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:20.348 "listen_address": { 00:20:20.348 "trtype": "RDMA", 00:20:20.348 "adrfam": "IPv4", 00:20:20.348 "traddr": "192.168.100.8", 00:20:20.348 "trsvcid": "4420" 00:20:20.348 }, 00:20:20.348 "peer_address": { 00:20:20.348 "trtype": "RDMA", 00:20:20.348 "adrfam": "IPv4", 00:20:20.348 "traddr": "192.168.100.8", 00:20:20.348 "trsvcid": "44444" 00:20:20.348 }, 00:20:20.348 "auth": { 00:20:20.348 "state": "completed", 00:20:20.348 "digest": "sha256", 00:20:20.348 "dhgroup": "ffdhe4096" 00:20:20.348 } 00:20:20.348 } 00:20:20.348 ]' 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.348 15:38:58 
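
Once those three checks pass, the round tears down in the order visible next at @78-@83: detach the host-side controller, run the kernel connect/disconnect leg, then deauthorize the host so the next key starts from a clean slate. In short:

    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0          # @78
    # ... kernel nvme connect / nvme disconnect round trip (@80-@82) ...
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"  # @83
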
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.348 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.607 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:20.607 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:21.176 15:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.435 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:21.435 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.435 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.435 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.435 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.435 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.435 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.694 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.695 15:38:59 
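
Throughout the trace, every hostrpc call is immediately followed by its @31 expansion against /var/tmp/host.sock, so hostrpc is evidently a one-line wrapper that routes rpc.py to the second (host-role) SPDK application. A plausible reconstruction; the exact definition in target/auth.sh is not shown in this slice, and $rootdir is illustrative:

    # trace marker @31: send RPCs to the initiator-side SPDK app instead of the target
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
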
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.695 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.954 00:20:21.954 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.954 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.954 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.213 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.213 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.213 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.213 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.213 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.213 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.213 { 00:20:22.213 "cntlid": 31, 00:20:22.213 "qid": 0, 00:20:22.213 "state": "enabled", 00:20:22.213 "thread": "nvmf_tgt_poll_group_000", 00:20:22.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:22.213 "listen_address": { 00:20:22.213 "trtype": "RDMA", 00:20:22.213 "adrfam": "IPv4", 00:20:22.213 "traddr": "192.168.100.8", 00:20:22.213 "trsvcid": "4420" 00:20:22.213 }, 00:20:22.213 "peer_address": { 00:20:22.213 "trtype": "RDMA", 00:20:22.213 "adrfam": "IPv4", 00:20:22.213 "traddr": "192.168.100.8", 00:20:22.213 "trsvcid": "34152" 00:20:22.213 }, 00:20:22.213 "auth": { 00:20:22.213 "state": "completed", 00:20:22.213 "digest": "sha256", 00:20:22.214 "dhgroup": "ffdhe4096" 00:20:22.214 } 00:20:22.214 } 00:20:22.214 ]' 00:20:22.214 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.214 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.214 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.214 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.214 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:20:22.214 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.214 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.214 15:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.473 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:22.473 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:23.040 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.299 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:23.299 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.299 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.299 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.299 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.299 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.299 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.299 15:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.299 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.300 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.300 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.300 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.868 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.868 { 00:20:23.868 "cntlid": 33, 00:20:23.868 "qid": 0, 00:20:23.868 "state": "enabled", 00:20:23.868 "thread": "nvmf_tgt_poll_group_000", 00:20:23.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:23.868 "listen_address": { 00:20:23.868 "trtype": "RDMA", 00:20:23.868 "adrfam": "IPv4", 00:20:23.868 "traddr": "192.168.100.8", 00:20:23.868 "trsvcid": "4420" 00:20:23.868 }, 00:20:23.868 "peer_address": { 00:20:23.868 "trtype": "RDMA", 00:20:23.868 "adrfam": "IPv4", 00:20:23.868 "traddr": "192.168.100.8", 00:20:23.868 "trsvcid": "34515" 00:20:23.868 }, 00:20:23.868 "auth": { 00:20:23.868 "state": "completed", 00:20:23.868 "digest": "sha256", 00:20:23.868 "dhgroup": "ffdhe6144" 00:20:23.868 } 00:20:23.868 } 00:20:23.868 ]' 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.868 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:20:24.127 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:24.127 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.127 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.127 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.127 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.385 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:24.386 15:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:24.953 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.953 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:24.953 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.953 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.953 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.953 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.953 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.953 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.213 15:39:02 
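
The ffdhe6144 sweep now under way follows the same nesting as the two sweeps before it. Reconstructed from the @119-@123 trace markers (the outer digest loop is not visible in this slice, so sha256 is written literally):

    for dhgroup in "${dhgroups[@]}"; do      # @119: ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do       # @120: key indices 0 1 2 3
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # @121
            connect_authenticate sha256 "$dhgroup" "$keyid"            # @123
        done
    done
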
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.213 15:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.472 00:20:25.472 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.472 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.472 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.731 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.731 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.731 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.731 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.731 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.731 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.731 { 00:20:25.731 "cntlid": 35, 00:20:25.731 "qid": 0, 00:20:25.731 "state": "enabled", 00:20:25.731 "thread": "nvmf_tgt_poll_group_000", 00:20:25.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:25.731 "listen_address": { 00:20:25.731 "trtype": "RDMA", 00:20:25.731 "adrfam": "IPv4", 00:20:25.731 "traddr": "192.168.100.8", 00:20:25.731 "trsvcid": "4420" 00:20:25.731 }, 00:20:25.731 "peer_address": { 00:20:25.731 "trtype": "RDMA", 00:20:25.731 "adrfam": "IPv4", 00:20:25.731 "traddr": "192.168.100.8", 00:20:25.731 "trsvcid": "37186" 00:20:25.731 }, 00:20:25.731 "auth": { 00:20:25.731 "state": "completed", 00:20:25.731 "digest": "sha256", 00:20:25.731 "dhgroup": "ffdhe6144" 00:20:25.731 } 00:20:25.731 } 
00:20:25.731 ]' 00:20:25.731 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.731 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.731 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.990 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.990 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.990 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.990 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.990 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.990 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:25.991 15:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.928 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.187 15:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.446 00:20:27.446 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.446 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.446 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.705 { 00:20:27.705 "cntlid": 37, 00:20:27.705 "qid": 0, 00:20:27.705 "state": "enabled", 00:20:27.705 "thread": "nvmf_tgt_poll_group_000", 00:20:27.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:27.705 "listen_address": { 00:20:27.705 "trtype": "RDMA", 00:20:27.705 "adrfam": "IPv4", 00:20:27.705 "traddr": "192.168.100.8", 00:20:27.705 "trsvcid": "4420" 00:20:27.705 }, 00:20:27.705 "peer_address": { 00:20:27.705 "trtype": "RDMA", 00:20:27.705 "adrfam": 
"IPv4", 00:20:27.705 "traddr": "192.168.100.8", 00:20:27.705 "trsvcid": "51022" 00:20:27.705 }, 00:20:27.705 "auth": { 00:20:27.705 "state": "completed", 00:20:27.705 "digest": "sha256", 00:20:27.705 "dhgroup": "ffdhe6144" 00:20:27.705 } 00:20:27.705 } 00:20:27.705 ]' 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.705 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.964 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:27.965 15:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:28.533 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.533 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:28.533 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.533 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.533 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.533 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.533 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.533 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.792 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:20:28.792 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.792 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.792 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.792 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.792 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.793 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:28.793 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.793 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.793 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.793 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.793 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.793 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.361 00:20:29.361 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.361 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.361 15:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.361 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.361 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.361 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.361 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.361 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.361 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.361 { 00:20:29.361 "cntlid": 39, 00:20:29.361 "qid": 0, 00:20:29.361 "state": "enabled", 00:20:29.361 "thread": "nvmf_tgt_poll_group_000", 00:20:29.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:29.361 "listen_address": { 00:20:29.361 "trtype": "RDMA", 00:20:29.361 "adrfam": "IPv4", 00:20:29.361 
"traddr": "192.168.100.8", 00:20:29.361 "trsvcid": "4420" 00:20:29.361 }, 00:20:29.361 "peer_address": { 00:20:29.361 "trtype": "RDMA", 00:20:29.361 "adrfam": "IPv4", 00:20:29.361 "traddr": "192.168.100.8", 00:20:29.361 "trsvcid": "42379" 00:20:29.361 }, 00:20:29.361 "auth": { 00:20:29.361 "state": "completed", 00:20:29.361 "digest": "sha256", 00:20:29.361 "dhgroup": "ffdhe6144" 00:20:29.361 } 00:20:29.361 } 00:20:29.361 ]' 00:20:29.361 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.361 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.361 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.620 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.620 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.620 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.620 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.620 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.620 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:29.620 15:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.558 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.155 00:20:31.155 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.155 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.155 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.414 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.414 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.414 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.414 15:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.414 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.414 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.414 { 00:20:31.414 "cntlid": 41, 00:20:31.414 "qid": 0, 00:20:31.414 "state": "enabled", 
00:20:31.414 "thread": "nvmf_tgt_poll_group_000", 00:20:31.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:31.414 "listen_address": { 00:20:31.414 "trtype": "RDMA", 00:20:31.414 "adrfam": "IPv4", 00:20:31.414 "traddr": "192.168.100.8", 00:20:31.414 "trsvcid": "4420" 00:20:31.414 }, 00:20:31.414 "peer_address": { 00:20:31.414 "trtype": "RDMA", 00:20:31.414 "adrfam": "IPv4", 00:20:31.414 "traddr": "192.168.100.8", 00:20:31.414 "trsvcid": "33256" 00:20:31.414 }, 00:20:31.414 "auth": { 00:20:31.414 "state": "completed", 00:20:31.414 "digest": "sha256", 00:20:31.414 "dhgroup": "ffdhe8192" 00:20:31.414 } 00:20:31.414 } 00:20:31.414 ]' 00:20:31.414 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.414 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.414 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.414 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.414 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.414 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.414 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.415 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.673 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:31.673 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:32.240 15:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.499 15:39:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.499 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.067 00:20:33.067 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.067 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.067 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.326 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.326 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.326 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.326 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:33.326 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.326 15:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.326 { 00:20:33.326 "cntlid": 43, 00:20:33.326 "qid": 0, 00:20:33.326 "state": "enabled", 00:20:33.326 "thread": "nvmf_tgt_poll_group_000", 00:20:33.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:33.326 "listen_address": { 00:20:33.326 "trtype": "RDMA", 00:20:33.326 "adrfam": "IPv4", 00:20:33.326 "traddr": "192.168.100.8", 00:20:33.326 "trsvcid": "4420" 00:20:33.326 }, 00:20:33.326 "peer_address": { 00:20:33.326 "trtype": "RDMA", 00:20:33.326 "adrfam": "IPv4", 00:20:33.326 "traddr": "192.168.100.8", 00:20:33.326 "trsvcid": "57237" 00:20:33.326 }, 00:20:33.326 "auth": { 00:20:33.326 "state": "completed", 00:20:33.326 "digest": "sha256", 00:20:33.326 "dhgroup": "ffdhe8192" 00:20:33.326 } 00:20:33.326 } 00:20:33.326 ]' 00:20:33.326 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.326 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.326 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.326 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.326 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.585 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.585 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.585 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.585 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:33.585 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:34.521 15:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.521 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.522 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.522 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.522 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.522 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.089 00:20:35.089 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.089 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.089 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.348 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.348 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.348 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.348 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.348 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.348 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.348 { 00:20:35.348 "cntlid": 45, 00:20:35.348 "qid": 0, 00:20:35.348 "state": "enabled", 00:20:35.348 "thread": "nvmf_tgt_poll_group_000", 00:20:35.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:35.348 "listen_address": { 00:20:35.348 "trtype": "RDMA", 00:20:35.348 "adrfam": "IPv4", 00:20:35.348 "traddr": "192.168.100.8", 00:20:35.348 "trsvcid": "4420" 00:20:35.348 }, 00:20:35.348 "peer_address": { 00:20:35.348 "trtype": "RDMA", 00:20:35.348 "adrfam": "IPv4", 00:20:35.348 "traddr": "192.168.100.8", 00:20:35.348 "trsvcid": "58367" 00:20:35.348 }, 00:20:35.348 "auth": { 00:20:35.348 "state": "completed", 00:20:35.348 "digest": "sha256", 00:20:35.348 "dhgroup": "ffdhe8192" 00:20:35.348 } 00:20:35.348 } 00:20:35.348 ]' 00:20:35.348 15:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.348 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.348 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.348 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.348 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.348 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.348 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.348 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.606 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:35.607 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:36.173 15:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.433 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:36.433 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.433 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.433 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.433 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.433 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.433 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.692 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.951 00:20:36.951 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.951 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.951 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.210 
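The trace above repeats one verification cycle per (digest, dhgroup, key id) combination. Condensed into a plain shell sketch, using only the RPC calls, flags, and jq filters that appear in this run (the key names key0..key3/ckey0..ckey3 and the 8013ee90-... host NQN are specific to this invocation, and rpc_cmd is the autotest helper that talks to the target-side RPC socket), one pass looks like:

# One connect_authenticate pass, as exercised repeatedly above; the digest,
# dhgroup, and key id vary per iteration (sha256/sha384 x null/ffdhe6144/ffdhe8192 x key0..key3).
HOSTRPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e"

# Host side: restrict the initiator to the digest/dhgroup pair under test.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Target side: register the host with its DH-CHAP key (the --dhchap-ctrlr-key
# argument is included only when a controller key exists for that key id).
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach over RDMA, authenticating with the same key pair.
$HOSTRPC bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the negotiated parameters on the resulting qpair.
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect: completed
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect: sha256
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect: ffdhe8192

# Tear down before the next combination.
$HOSTRPC bdev_nvme_detach_controller nvme0

After the RPC-driven attach is verified and detached, each pass also exercises the kernel initiator leg seen in the trace (nvme connect -t rdma ... --dhchap-secret DHHC-1:... followed by nvme disconnect), and nvmf_subsystem_remove_host then drops the host entry so the next combination authenticates against a fresh registration.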
15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.210 { 00:20:37.210 "cntlid": 47, 00:20:37.210 "qid": 0, 00:20:37.210 "state": "enabled", 00:20:37.210 "thread": "nvmf_tgt_poll_group_000", 00:20:37.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:37.210 "listen_address": { 00:20:37.210 "trtype": "RDMA", 00:20:37.210 "adrfam": "IPv4", 00:20:37.210 "traddr": "192.168.100.8", 00:20:37.210 "trsvcid": "4420" 00:20:37.210 }, 00:20:37.210 "peer_address": { 00:20:37.210 "trtype": "RDMA", 00:20:37.210 "adrfam": "IPv4", 00:20:37.210 "traddr": "192.168.100.8", 00:20:37.210 "trsvcid": "37571" 00:20:37.210 }, 00:20:37.210 "auth": { 00:20:37.210 "state": "completed", 00:20:37.210 "digest": "sha256", 00:20:37.210 "dhgroup": "ffdhe8192" 00:20:37.210 } 00:20:37.210 } 00:20:37.210 ]' 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.210 15:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.469 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.469 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.469 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.469 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:37.469 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:38.405 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.406 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:38.406 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.406 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.406 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.406 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:38.406 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.406 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.406 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.406 15:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.406 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.665 00:20:38.665 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:20:38.665 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.665 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.925 { 00:20:38.925 "cntlid": 49, 00:20:38.925 "qid": 0, 00:20:38.925 "state": "enabled", 00:20:38.925 "thread": "nvmf_tgt_poll_group_000", 00:20:38.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:38.925 "listen_address": { 00:20:38.925 "trtype": "RDMA", 00:20:38.925 "adrfam": "IPv4", 00:20:38.925 "traddr": "192.168.100.8", 00:20:38.925 "trsvcid": "4420" 00:20:38.925 }, 00:20:38.925 "peer_address": { 00:20:38.925 "trtype": "RDMA", 00:20:38.925 "adrfam": "IPv4", 00:20:38.925 "traddr": "192.168.100.8", 00:20:38.925 "trsvcid": "34319" 00:20:38.925 }, 00:20:38.925 "auth": { 00:20:38.925 "state": "completed", 00:20:38.925 "digest": "sha384", 00:20:38.925 "dhgroup": "null" 00:20:38.925 } 00:20:38.925 } 00:20:38.925 ]' 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:38.925 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.184 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.184 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.184 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.184 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:39.184 15:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:39.751 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.011 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:40.011 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.011 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.011 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.011 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.011 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.011 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.270 15:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.530 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.530 { 00:20:40.530 "cntlid": 51, 00:20:40.530 "qid": 0, 00:20:40.530 "state": "enabled", 00:20:40.530 "thread": "nvmf_tgt_poll_group_000", 00:20:40.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:40.530 "listen_address": { 00:20:40.530 "trtype": "RDMA", 00:20:40.530 "adrfam": "IPv4", 00:20:40.530 "traddr": "192.168.100.8", 00:20:40.530 "trsvcid": "4420" 00:20:40.530 }, 00:20:40.530 "peer_address": { 00:20:40.530 "trtype": "RDMA", 00:20:40.530 "adrfam": "IPv4", 00:20:40.530 "traddr": "192.168.100.8", 00:20:40.530 "trsvcid": "41158" 00:20:40.530 }, 00:20:40.530 "auth": { 00:20:40.530 "state": "completed", 00:20:40.530 "digest": "sha384", 00:20:40.530 "dhgroup": "null" 00:20:40.530 } 00:20:40.530 } 00:20:40.530 ]' 00:20:40.530 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.789 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.789 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.789 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:40.789 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.789 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.789 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.789 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.048 15:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:41.048 15:39:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:41.616 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.616 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:41.616 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.616 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.616 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.616 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.616 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:41.616 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:20:41.876 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.134 00:20:42.134 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.134 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.134 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.394 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.394 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.394 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.394 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.394 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.394 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.394 { 00:20:42.394 "cntlid": 53, 00:20:42.394 "qid": 0, 00:20:42.394 "state": "enabled", 00:20:42.394 "thread": "nvmf_tgt_poll_group_000", 00:20:42.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:42.394 "listen_address": { 00:20:42.394 "trtype": "RDMA", 00:20:42.394 "adrfam": "IPv4", 00:20:42.394 "traddr": "192.168.100.8", 00:20:42.394 "trsvcid": "4420" 00:20:42.394 }, 00:20:42.394 "peer_address": { 00:20:42.394 "trtype": "RDMA", 00:20:42.394 "adrfam": "IPv4", 00:20:42.394 "traddr": "192.168.100.8", 00:20:42.394 "trsvcid": "56494" 00:20:42.394 }, 00:20:42.394 "auth": { 00:20:42.394 "state": "completed", 00:20:42.394 "digest": "sha384", 00:20:42.394 "dhgroup": "null" 00:20:42.394 } 00:20:42.394 } 00:20:42.394 ]' 00:20:42.394 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.394 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.394 15:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.394 15:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:42.394 15:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.394 15:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.394 15:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.394 15:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.653 15:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:42.653 15:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:43.221 15:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.480 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.481 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.481 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:43.481 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.481 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.740 00:20:43.740 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.740 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.740 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.999 { 00:20:43.999 "cntlid": 55, 00:20:43.999 "qid": 0, 00:20:43.999 "state": "enabled", 00:20:43.999 "thread": "nvmf_tgt_poll_group_000", 00:20:43.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:43.999 "listen_address": { 00:20:43.999 "trtype": "RDMA", 00:20:43.999 "adrfam": "IPv4", 00:20:43.999 "traddr": "192.168.100.8", 00:20:43.999 "trsvcid": "4420" 00:20:43.999 }, 00:20:43.999 "peer_address": { 00:20:43.999 "trtype": "RDMA", 00:20:43.999 "adrfam": "IPv4", 00:20:43.999 "traddr": "192.168.100.8", 00:20:43.999 "trsvcid": "43973" 00:20:43.999 }, 00:20:43.999 "auth": { 00:20:43.999 "state": "completed", 00:20:43.999 "digest": "sha384", 00:20:43.999 "dhgroup": "null" 00:20:43.999 } 00:20:43.999 } 00:20:43.999 ]' 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.999 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.258 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.258 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.258 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
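Each connect_authenticate pass in this trace follows the same five-step shape. A minimal standalone sketch of one such cycle, reconstructed from the log (the rpc/hostsock shell variables are assumptions for readability, and target-side calls are assumed to go to the default SPDK RPC socket; every RPC name, flag, address, and NQN below appears verbatim in the trace, and key1/ckey1 refer to keys loaded earlier in the test):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

  # 1. Pin the host-side bdev layer to a single digest/dhgroup combination.
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
  # 2. Register the host on the target with a DH-HMAC-CHAP key; the controller
  #    key is optional (the key3 pass above omits it, so that leg stays unidirectional).
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3. Attach a controller over RDMA, which drives the authentication handshake.
  $rpc -s $hostsock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 4. Confirm on the target that the qpair negotiated what was requested.
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'    # expect: completed
  # 5. Tear down so the next digest/dhgroup/key combination starts clean.
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn
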
00:20:44.258 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:44.258 15:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:44.825 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.083 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:45.083 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.083 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.083 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.083 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.083 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.083 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.083 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.342 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:45.342 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.342 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.342 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:45.343 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.343 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.343 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.343 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.343 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.343 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.343 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.343 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.343 15:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.602 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.602 { 00:20:45.602 "cntlid": 57, 00:20:45.602 "qid": 0, 00:20:45.602 "state": "enabled", 00:20:45.602 "thread": "nvmf_tgt_poll_group_000", 00:20:45.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:45.602 "listen_address": { 00:20:45.602 "trtype": "RDMA", 00:20:45.602 "adrfam": "IPv4", 00:20:45.602 "traddr": "192.168.100.8", 00:20:45.602 "trsvcid": "4420" 00:20:45.602 }, 00:20:45.602 "peer_address": { 00:20:45.602 "trtype": "RDMA", 00:20:45.602 "adrfam": "IPv4", 00:20:45.602 "traddr": "192.168.100.8", 00:20:45.602 "trsvcid": "37779" 00:20:45.602 }, 00:20:45.602 "auth": { 00:20:45.602 "state": "completed", 00:20:45.602 "digest": "sha384", 00:20:45.602 "dhgroup": "ffdhe2048" 00:20:45.602 } 00:20:45.602 } 00:20:45.602 ]' 00:20:45.602 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.861 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.861 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.861 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.861 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.861 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.861 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:20:45.861 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.120 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:46.120 15:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:46.688 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.688 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:46.688 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.688 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.688 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.688 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.688 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:46.688 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.947 
15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.947 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.948 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.207 00:20:47.207 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.207 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.207 15:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.466 { 00:20:47.466 "cntlid": 59, 00:20:47.466 "qid": 0, 00:20:47.466 "state": "enabled", 00:20:47.466 "thread": "nvmf_tgt_poll_group_000", 00:20:47.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:47.466 "listen_address": { 00:20:47.466 "trtype": "RDMA", 00:20:47.466 "adrfam": "IPv4", 00:20:47.466 "traddr": "192.168.100.8", 00:20:47.466 "trsvcid": "4420" 00:20:47.466 }, 00:20:47.466 "peer_address": { 00:20:47.466 "trtype": "RDMA", 00:20:47.466 "adrfam": "IPv4", 00:20:47.466 "traddr": "192.168.100.8", 00:20:47.466 "trsvcid": "48341" 00:20:47.466 }, 00:20:47.466 "auth": { 00:20:47.466 "state": "completed", 00:20:47.466 "digest": "sha384", 00:20:47.466 "dhgroup": "ffdhe2048" 00:20:47.466 } 00:20:47.466 } 00:20:47.466 ]' 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
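The nvme_connect passes interleaved with these cycles exercise the kernel initiator against the same subsystem using nvme-cli. The secrets use the DHHC-1 container format ("DHHC-1:NN:<base64>:"); as I read the NVMe in-band authentication spec, NN identifies the hash used to transform the stored secret (00 = untransformed, 01/02/03 = SHA-256/-384/-512) and the base64 payload carries the key material together with a CRC-32 check. A host-side connect mirroring the trace (flags, addresses, and NQNs copied from the log; the secrets below are placeholders, not usable keys):

  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
      --dhchap-secret 'DHHC-1:01:<base64-host-key>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<base64-ctrl-key>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
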
00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.466 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.725 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:47.725 15:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:48.293 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.552 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:48.552 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.552 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.552 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.552 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.552 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.552 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.811 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.070 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.070 { 00:20:49.070 "cntlid": 61, 00:20:49.070 "qid": 0, 00:20:49.070 "state": "enabled", 00:20:49.070 "thread": "nvmf_tgt_poll_group_000", 00:20:49.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:49.070 "listen_address": { 00:20:49.070 "trtype": "RDMA", 00:20:49.070 "adrfam": "IPv4", 00:20:49.070 "traddr": "192.168.100.8", 00:20:49.070 "trsvcid": "4420" 00:20:49.070 }, 00:20:49.070 "peer_address": { 00:20:49.070 "trtype": "RDMA", 00:20:49.070 "adrfam": "IPv4", 00:20:49.070 "traddr": "192.168.100.8", 00:20:49.070 "trsvcid": "55881" 00:20:49.070 }, 00:20:49.070 "auth": { 00:20:49.070 "state": "completed", 00:20:49.070 "digest": "sha384", 00:20:49.070 "dhgroup": "ffdhe2048" 00:20:49.070 } 00:20:49.070 } 00:20:49.070 ]' 00:20:49.070 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.329 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:49.329 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.329 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.329 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.329 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.329 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.329 15:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.588 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:49.588 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:50.155 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.155 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:50.155 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.155 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.155 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.155 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.155 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.155 15:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.415 15:39:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.415 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.674 00:20:50.674 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.674 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.674 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.932 { 00:20:50.932 "cntlid": 63, 00:20:50.932 "qid": 0, 00:20:50.932 "state": "enabled", 00:20:50.932 "thread": "nvmf_tgt_poll_group_000", 00:20:50.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:50.932 "listen_address": { 00:20:50.932 "trtype": "RDMA", 00:20:50.932 "adrfam": "IPv4", 00:20:50.932 "traddr": "192.168.100.8", 00:20:50.932 "trsvcid": "4420" 00:20:50.932 }, 00:20:50.932 "peer_address": { 00:20:50.932 "trtype": "RDMA", 00:20:50.932 "adrfam": "IPv4", 00:20:50.932 "traddr": "192.168.100.8", 00:20:50.932 "trsvcid": "33280" 00:20:50.932 }, 00:20:50.932 "auth": { 00:20:50.932 "state": "completed", 00:20:50.932 "digest": "sha384", 00:20:50.932 "dhgroup": "ffdhe2048" 00:20:50.932 } 00:20:50.932 } 00:20:50.932 ]' 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.932 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.192 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:51.192 15:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:51.760 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.760 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:51.760 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.029 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.030 15:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.291 00:20:52.291 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.291 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.292 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.551 { 00:20:52.551 "cntlid": 65, 00:20:52.551 "qid": 0, 00:20:52.551 "state": "enabled", 00:20:52.551 "thread": "nvmf_tgt_poll_group_000", 00:20:52.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:52.551 "listen_address": { 00:20:52.551 "trtype": "RDMA", 00:20:52.551 "adrfam": "IPv4", 00:20:52.551 "traddr": "192.168.100.8", 00:20:52.551 "trsvcid": "4420" 00:20:52.551 }, 00:20:52.551 "peer_address": { 00:20:52.551 "trtype": "RDMA", 00:20:52.551 "adrfam": "IPv4", 00:20:52.551 "traddr": "192.168.100.8", 00:20:52.551 "trsvcid": "35172" 
00:20:52.551 }, 00:20:52.551 "auth": { 00:20:52.551 "state": "completed", 00:20:52.551 "digest": "sha384", 00:20:52.551 "dhgroup": "ffdhe3072" 00:20:52.551 } 00:20:52.551 } 00:20:52.551 ]' 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.551 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.810 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:52.810 15:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:53.378 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.637 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:53.637 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.637 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.637 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.637 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.637 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.637 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.897 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.156 00:20:54.156 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.156 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.156 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.156 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.156 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.156 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.156 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.415 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.415 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.415 { 00:20:54.415 "cntlid": 67, 00:20:54.415 "qid": 0, 00:20:54.415 "state": "enabled", 00:20:54.415 "thread": "nvmf_tgt_poll_group_000", 00:20:54.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 
00:20:54.415 "listen_address": { 00:20:54.415 "trtype": "RDMA", 00:20:54.415 "adrfam": "IPv4", 00:20:54.415 "traddr": "192.168.100.8", 00:20:54.415 "trsvcid": "4420" 00:20:54.415 }, 00:20:54.415 "peer_address": { 00:20:54.415 "trtype": "RDMA", 00:20:54.415 "adrfam": "IPv4", 00:20:54.415 "traddr": "192.168.100.8", 00:20:54.415 "trsvcid": "54836" 00:20:54.415 }, 00:20:54.415 "auth": { 00:20:54.415 "state": "completed", 00:20:54.415 "digest": "sha384", 00:20:54.415 "dhgroup": "ffdhe3072" 00:20:54.415 } 00:20:54.415 } 00:20:54.415 ]' 00:20:54.415 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.415 15:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.415 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.416 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.416 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.416 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.416 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.416 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.675 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:54.675 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:20:55.243 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.243 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:55.243 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.243 15:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.243 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.243 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.243 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.243 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.502 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:55.502 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.503 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.762 00:20:55.762 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.762 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.762 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:20:56.021 { 00:20:56.021 "cntlid": 69, 00:20:56.021 "qid": 0, 00:20:56.021 "state": "enabled", 00:20:56.021 "thread": "nvmf_tgt_poll_group_000", 00:20:56.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:56.021 "listen_address": { 00:20:56.021 "trtype": "RDMA", 00:20:56.021 "adrfam": "IPv4", 00:20:56.021 "traddr": "192.168.100.8", 00:20:56.021 "trsvcid": "4420" 00:20:56.021 }, 00:20:56.021 "peer_address": { 00:20:56.021 "trtype": "RDMA", 00:20:56.021 "adrfam": "IPv4", 00:20:56.021 "traddr": "192.168.100.8", 00:20:56.021 "trsvcid": "45984" 00:20:56.021 }, 00:20:56.021 "auth": { 00:20:56.021 "state": "completed", 00:20:56.021 "digest": "sha384", 00:20:56.021 "dhgroup": "ffdhe3072" 00:20:56.021 } 00:20:56.021 } 00:20:56.021 ]' 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.021 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.280 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:56.280 15:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:20:56.849 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.108 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:57.108 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.108 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.108 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.108 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.108 15:39:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.108 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.367 15:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.626 00:20:57.626 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.626 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.627 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.627 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.627 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.627 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.627 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.627 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.627 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.627 { 00:20:57.627 "cntlid": 71, 00:20:57.627 "qid": 0, 00:20:57.627 "state": "enabled", 00:20:57.627 "thread": "nvmf_tgt_poll_group_000", 00:20:57.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:57.627 "listen_address": { 00:20:57.627 "trtype": "RDMA", 00:20:57.627 "adrfam": "IPv4", 00:20:57.627 "traddr": "192.168.100.8", 00:20:57.627 "trsvcid": "4420" 00:20:57.627 }, 00:20:57.627 "peer_address": { 00:20:57.627 "trtype": "RDMA", 00:20:57.627 "adrfam": "IPv4", 00:20:57.627 "traddr": "192.168.100.8", 00:20:57.627 "trsvcid": "44916" 00:20:57.627 }, 00:20:57.627 "auth": { 00:20:57.627 "state": "completed", 00:20:57.627 "digest": "sha384", 00:20:57.627 "dhgroup": "ffdhe3072" 00:20:57.627 } 00:20:57.627 } 00:20:57.627 ]' 00:20:57.627 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.886 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.886 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.886 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.886 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.886 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.886 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.886 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.145 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:58.145 15:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:20:58.712 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.712 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:58.712 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.712 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.712 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.712 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:20:58.713 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.713 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.713 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.972 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.235 00:20:59.235 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.235 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.235 15:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.552 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.552 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.552 15:39:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.552 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.552 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.552 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.552 { 00:20:59.552 "cntlid": 73, 00:20:59.552 "qid": 0, 00:20:59.552 "state": "enabled", 00:20:59.552 "thread": "nvmf_tgt_poll_group_000", 00:20:59.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:59.552 "listen_address": { 00:20:59.552 "trtype": "RDMA", 00:20:59.552 "adrfam": "IPv4", 00:20:59.552 "traddr": "192.168.100.8", 00:20:59.552 "trsvcid": "4420" 00:20:59.552 }, 00:20:59.552 "peer_address": { 00:20:59.552 "trtype": "RDMA", 00:20:59.552 "adrfam": "IPv4", 00:20:59.552 "traddr": "192.168.100.8", 00:20:59.552 "trsvcid": "48924" 00:20:59.553 }, 00:20:59.553 "auth": { 00:20:59.553 "state": "completed", 00:20:59.553 "digest": "sha384", 00:20:59.553 "dhgroup": "ffdhe4096" 00:20:59.553 } 00:20:59.553 } 00:20:59.553 ]' 00:20:59.553 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.553 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.553 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.553 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.553 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.553 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.553 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.553 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.835 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:20:59.835 15:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:00.403 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.403 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:00.403 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.403 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.403 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.403 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.403 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.403 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.662 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.663 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.924 00:21:00.924 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.924 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.924 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.183 { 00:21:01.183 "cntlid": 75, 00:21:01.183 "qid": 0, 00:21:01.183 "state": "enabled", 00:21:01.183 "thread": "nvmf_tgt_poll_group_000", 00:21:01.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:01.183 "listen_address": { 00:21:01.183 "trtype": "RDMA", 00:21:01.183 "adrfam": "IPv4", 00:21:01.183 "traddr": "192.168.100.8", 00:21:01.183 "trsvcid": "4420" 00:21:01.183 }, 00:21:01.183 "peer_address": { 00:21:01.183 "trtype": "RDMA", 00:21:01.183 "adrfam": "IPv4", 00:21:01.183 "traddr": "192.168.100.8", 00:21:01.183 "trsvcid": "43288" 00:21:01.183 }, 00:21:01.183 "auth": { 00:21:01.183 "state": "completed", 00:21:01.183 "digest": "sha384", 00:21:01.183 "dhgroup": "ffdhe4096" 00:21:01.183 } 00:21:01.183 } 00:21:01.183 ]' 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.183 15:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.442 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:01.442 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:02.010 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.269 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:02.269 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.269 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.269 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.269 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.269 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.269 15:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.527 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.786 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.786 { 00:21:02.786 "cntlid": 77, 00:21:02.786 "qid": 0, 00:21:02.786 "state": "enabled", 00:21:02.786 "thread": "nvmf_tgt_poll_group_000", 00:21:02.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:02.786 "listen_address": { 00:21:02.786 "trtype": "RDMA", 00:21:02.786 "adrfam": "IPv4", 00:21:02.786 "traddr": "192.168.100.8", 00:21:02.786 "trsvcid": "4420" 00:21:02.786 }, 00:21:02.786 "peer_address": { 00:21:02.786 "trtype": "RDMA", 00:21:02.786 "adrfam": "IPv4", 00:21:02.786 "traddr": "192.168.100.8", 00:21:02.786 "trsvcid": "35598" 00:21:02.786 }, 00:21:02.786 "auth": { 00:21:02.786 "state": "completed", 00:21:02.786 "digest": "sha384", 00:21:02.786 "dhgroup": "ffdhe4096" 00:21:02.786 } 00:21:02.786 } 00:21:02.786 ]' 00:21:02.786 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.045 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.045 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.045 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.045 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.045 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.045 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.045 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.304 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:03.304 15:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:03.872 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.872 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:03.872 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.872 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.872 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.872 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.872 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.872 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.132 15:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.391 00:21:04.391 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.391 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.391 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.650 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.650 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.650 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.650 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.650 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.650 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.650 { 00:21:04.650 "cntlid": 79, 00:21:04.650 "qid": 0, 00:21:04.650 "state": "enabled", 00:21:04.650 "thread": "nvmf_tgt_poll_group_000", 00:21:04.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:04.650 "listen_address": { 00:21:04.650 "trtype": "RDMA", 00:21:04.650 "adrfam": "IPv4", 00:21:04.650 "traddr": "192.168.100.8", 00:21:04.650 "trsvcid": "4420" 00:21:04.650 }, 00:21:04.650 "peer_address": { 00:21:04.650 "trtype": "RDMA", 00:21:04.650 "adrfam": "IPv4", 00:21:04.650 "traddr": "192.168.100.8", 00:21:04.650 "trsvcid": "48948" 00:21:04.650 }, 00:21:04.650 "auth": { 00:21:04.650 "state": "completed", 00:21:04.650 "digest": "sha384", 00:21:04.650 "dhgroup": "ffdhe4096" 00:21:04.650 } 00:21:04.650 } 00:21:04.650 ]' 00:21:04.650 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.650 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.651 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.651 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.651 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.651 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.651 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.651 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.909 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:04.909 15:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:05.476 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.735 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:05.735 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.735 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.735 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.735 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.735 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.735 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.735 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.994 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:05.994 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.994 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.994 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:05.995 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.995 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.995 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.995 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.995 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.995 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.995 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.995 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.995 15:39:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.253 00:21:06.253 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.253 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.253 15:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.513 { 00:21:06.513 "cntlid": 81, 00:21:06.513 "qid": 0, 00:21:06.513 "state": "enabled", 00:21:06.513 "thread": "nvmf_tgt_poll_group_000", 00:21:06.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:06.513 "listen_address": { 00:21:06.513 "trtype": "RDMA", 00:21:06.513 "adrfam": "IPv4", 00:21:06.513 "traddr": "192.168.100.8", 00:21:06.513 "trsvcid": "4420" 00:21:06.513 }, 00:21:06.513 "peer_address": { 00:21:06.513 "trtype": "RDMA", 00:21:06.513 "adrfam": "IPv4", 00:21:06.513 "traddr": "192.168.100.8", 00:21:06.513 "trsvcid": "42280" 00:21:06.513 }, 00:21:06.513 "auth": { 00:21:06.513 "state": "completed", 00:21:06.513 "digest": "sha384", 00:21:06.513 "dhgroup": "ffdhe6144" 00:21:06.513 } 00:21:06.513 } 00:21:06.513 ]' 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.513 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.772 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:06.772 15:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:07.340 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
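A minimal standalone sketch of the host-side sequence this stretch of the log drives — assuming rpc.py is on PATH, a host application is serving /var/tmp/host.sock, and DH-HMAC-CHAP keys named key1/ckey1 are already registered on both host and target (key setup happens earlier in the test); the NQNs and the 192.168.100.8 RDMA listener are the ones visible in the log:

# Sketch only: one digest/dhgroup/key combination, driven by hand.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Pin the host to a single digest and DH group for this round.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Authorize the host on the subsystem with both a host key and a
# controller key (bidirectional authentication).
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach over RDMA; the DH-HMAC-CHAP handshake runs during connect.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1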
00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.600 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.168 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.168 { 00:21:08.168 "cntlid": 83, 00:21:08.168 "qid": 0, 00:21:08.168 "state": "enabled", 00:21:08.168 "thread": "nvmf_tgt_poll_group_000", 00:21:08.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:08.168 "listen_address": { 00:21:08.168 "trtype": "RDMA", 00:21:08.168 "adrfam": "IPv4", 00:21:08.168 "traddr": "192.168.100.8", 00:21:08.168 "trsvcid": "4420" 00:21:08.168 }, 00:21:08.168 "peer_address": { 00:21:08.168 "trtype": "RDMA", 00:21:08.168 "adrfam": "IPv4", 00:21:08.168 "traddr": "192.168.100.8", 00:21:08.168 "trsvcid": "56956" 00:21:08.168 }, 00:21:08.168 "auth": { 00:21:08.168 "state": "completed", 00:21:08.168 "digest": "sha384", 00:21:08.168 "dhgroup": "ffdhe6144" 00:21:08.168 } 00:21:08.168 } 00:21:08.168 ]' 00:21:08.168 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.427 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.427 15:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.427 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.427 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.427 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.427 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
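The jq probes that follow each attach reduce to three assertions against the first qpair reported by the target; condensed, and under the same assumptions as the sketch above, the check is roughly:

# Sketch only: confirm the controller exists and authentication completed.
SUBNQN=nqn.2024-03.io.spdk:cnode0
[[ "$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

qpairs=$(rpc.py nvmf_subsystem_get_qpairs "$SUBNQN")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384    ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe6144 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# Tear down before the next digest/dhgroup/key combination.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0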
00:21:08.427 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.686 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:08.686 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:09.254 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.254 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:09.254 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.254 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.254 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.254 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.254 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.254 15:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.512 15:39:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.512 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.771 00:21:09.771 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.771 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.771 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.030 { 00:21:10.030 "cntlid": 85, 00:21:10.030 "qid": 0, 00:21:10.030 "state": "enabled", 00:21:10.030 "thread": "nvmf_tgt_poll_group_000", 00:21:10.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:10.030 "listen_address": { 00:21:10.030 "trtype": "RDMA", 00:21:10.030 "adrfam": "IPv4", 00:21:10.030 "traddr": "192.168.100.8", 00:21:10.030 "trsvcid": "4420" 00:21:10.030 }, 00:21:10.030 "peer_address": { 00:21:10.030 "trtype": "RDMA", 00:21:10.030 "adrfam": "IPv4", 00:21:10.030 "traddr": "192.168.100.8", 00:21:10.030 "trsvcid": "52774" 00:21:10.030 }, 00:21:10.030 "auth": { 00:21:10.030 "state": "completed", 00:21:10.030 "digest": "sha384", 00:21:10.030 "dhgroup": "ffdhe6144" 00:21:10.030 } 00:21:10.030 } 00:21:10.030 ]' 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.030 
15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.030 15:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.289 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:10.289 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:10.856 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.114 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:11.114 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.114 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.114 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.114 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.114 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.114 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.373 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:11.373 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.373 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.374 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:11.374 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.374 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.374 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:11.374 15:39:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.374 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.374 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.374 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.374 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.374 15:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.633 00:21:11.633 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.633 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.633 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.893 { 00:21:11.893 "cntlid": 87, 00:21:11.893 "qid": 0, 00:21:11.893 "state": "enabled", 00:21:11.893 "thread": "nvmf_tgt_poll_group_000", 00:21:11.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:11.893 "listen_address": { 00:21:11.893 "trtype": "RDMA", 00:21:11.893 "adrfam": "IPv4", 00:21:11.893 "traddr": "192.168.100.8", 00:21:11.893 "trsvcid": "4420" 00:21:11.893 }, 00:21:11.893 "peer_address": { 00:21:11.893 "trtype": "RDMA", 00:21:11.893 "adrfam": "IPv4", 00:21:11.893 "traddr": "192.168.100.8", 00:21:11.893 "trsvcid": "39262" 00:21:11.893 }, 00:21:11.893 "auth": { 00:21:11.893 "state": "completed", 00:21:11.893 "digest": "sha384", 00:21:11.893 "dhgroup": "ffdhe6144" 00:21:11.893 } 00:21:11.893 } 00:21:11.893 ]' 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.893 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.152 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:12.152 15:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:12.720 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.979 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:12.979 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.979 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.979 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.979 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.979 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.979 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.979 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.238 15:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.496 00:21:13.496 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.496 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.496 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.754 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.754 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.754 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.754 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.754 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.754 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.754 { 00:21:13.754 "cntlid": 89, 00:21:13.754 "qid": 0, 00:21:13.754 "state": "enabled", 00:21:13.754 "thread": "nvmf_tgt_poll_group_000", 00:21:13.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:13.754 "listen_address": { 00:21:13.754 "trtype": "RDMA", 00:21:13.754 "adrfam": "IPv4", 00:21:13.754 "traddr": "192.168.100.8", 00:21:13.754 "trsvcid": "4420" 00:21:13.754 }, 00:21:13.754 "peer_address": { 00:21:13.754 "trtype": "RDMA", 00:21:13.754 "adrfam": "IPv4", 00:21:13.754 "traddr": "192.168.100.8", 00:21:13.754 "trsvcid": "52205" 00:21:13.754 }, 00:21:13.754 "auth": { 00:21:13.754 "state": "completed", 00:21:13.754 "digest": "sha384", 00:21:13.754 "dhgroup": "ffdhe8192" 00:21:13.754 } 00:21:13.754 } 00:21:13.754 ]' 00:21:13.754 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.754 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.754 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.012 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.012 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.012 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.012 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.012 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.271 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:14.271 15:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:14.838 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.838 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:14.838 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.838 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.838 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.838 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.838 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.838 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
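The trace above repeats one authentication cycle per digest/dhgroup/key combination. A minimal consolidated sketch of that cycle, with the NQNs, address, and RPC names copied from the log, is shown below; the HOSTRPC/SUBNQN/HOSTNQN shell variables and the shortened rpc.py paths are illustrative conveniences for this sketch, not names taken from target/auth.sh itself.

    # One connect_authenticate iteration as traced above (digest, dhgroup and
    # key id vary per loop; secrets and full workspace paths omitted for brevity).
    HOSTRPC="scripts/rpc.py -s /var/tmp/host.sock"   # host-side SPDK RPC socket
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    # 1. Restrict the host initiator to a single digest/dhgroup pair.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # 2. Register the host on the target with the keys under test.
    scripts/rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Attach a controller over RDMA, then confirm the qpair authenticated.
    $HOSTRPC bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'  # expect: completed

    # 4. Tear down before the next digest/dhgroup/key combination.
    $HOSTRPC bdev_nvme_detach_controller nvme0

The subsequent assertions in the trace (jq on .[0].auth.digest and .[0].auth.dhgroup) check the same qpair dump for the expected digest and DH group before the controller is detached.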
00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.097 15:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.664 00:21:15.664 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.664 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.664 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.664 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.664 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.664 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.664 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.664 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.664 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.664 { 00:21:15.664 "cntlid": 91, 00:21:15.664 "qid": 0, 00:21:15.664 "state": "enabled", 00:21:15.664 "thread": "nvmf_tgt_poll_group_000", 00:21:15.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:15.664 "listen_address": { 00:21:15.664 "trtype": "RDMA", 00:21:15.664 "adrfam": "IPv4", 00:21:15.664 "traddr": "192.168.100.8", 00:21:15.664 "trsvcid": "4420" 00:21:15.664 }, 00:21:15.664 "peer_address": { 00:21:15.665 "trtype": "RDMA", 00:21:15.665 "adrfam": "IPv4", 00:21:15.665 "traddr": "192.168.100.8", 00:21:15.665 "trsvcid": "44571" 00:21:15.665 }, 00:21:15.665 "auth": { 
00:21:15.665 "state": "completed", 00:21:15.665 "digest": "sha384", 00:21:15.665 "dhgroup": "ffdhe8192" 00:21:15.665 } 00:21:15.665 } 00:21:15.665 ]' 00:21:15.665 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.665 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.665 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.922 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.922 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.922 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.922 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.922 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.922 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:15.922 15:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.858 15:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.426 00:21:17.426 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.426 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.426 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.685 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.685 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.685 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.685 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.685 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.685 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.685 { 00:21:17.686 "cntlid": 93, 00:21:17.686 "qid": 0, 00:21:17.686 "state": "enabled", 00:21:17.686 "thread": "nvmf_tgt_poll_group_000", 00:21:17.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:17.686 "listen_address": { 00:21:17.686 "trtype": "RDMA", 00:21:17.686 "adrfam": "IPv4", 00:21:17.686 "traddr": "192.168.100.8", 
00:21:17.686 "trsvcid": "4420" 00:21:17.686 }, 00:21:17.686 "peer_address": { 00:21:17.686 "trtype": "RDMA", 00:21:17.686 "adrfam": "IPv4", 00:21:17.686 "traddr": "192.168.100.8", 00:21:17.686 "trsvcid": "38224" 00:21:17.686 }, 00:21:17.686 "auth": { 00:21:17.686 "state": "completed", 00:21:17.686 "digest": "sha384", 00:21:17.686 "dhgroup": "ffdhe8192" 00:21:17.686 } 00:21:17.686 } 00:21:17.686 ]' 00:21:17.686 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.686 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.686 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.686 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.686 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.686 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.686 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.686 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.945 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:17.945 15:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:18.513 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.772 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:18.772 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.772 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.772 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.772 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.772 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.772 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.031 15:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.290 00:21:19.290 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.290 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.290 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.549 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.550 { 00:21:19.550 "cntlid": 95, 00:21:19.550 "qid": 0, 00:21:19.550 "state": "enabled", 00:21:19.550 "thread": "nvmf_tgt_poll_group_000", 00:21:19.550 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:19.550 "listen_address": { 00:21:19.550 "trtype": "RDMA", 00:21:19.550 "adrfam": "IPv4", 00:21:19.550 "traddr": "192.168.100.8", 00:21:19.550 "trsvcid": "4420" 00:21:19.550 }, 00:21:19.550 "peer_address": { 00:21:19.550 "trtype": "RDMA", 00:21:19.550 "adrfam": "IPv4", 00:21:19.550 "traddr": "192.168.100.8", 00:21:19.550 "trsvcid": "48981" 00:21:19.550 }, 00:21:19.550 "auth": { 00:21:19.550 "state": "completed", 00:21:19.550 "digest": "sha384", 00:21:19.550 "dhgroup": "ffdhe8192" 00:21:19.550 } 00:21:19.550 } 00:21:19.550 ]' 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.550 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.809 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.809 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.809 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.809 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:19.809 15:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:20.377 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.636 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:20.636 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.636 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.636 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.636 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:20.636 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.636 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.636 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.636 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.895 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.155 00:21:21.155 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.155 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.155 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.414 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.414 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.414 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.414 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.414 15:39:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.414 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.414 { 00:21:21.414 "cntlid": 97, 00:21:21.414 "qid": 0, 00:21:21.414 "state": "enabled", 00:21:21.414 "thread": "nvmf_tgt_poll_group_000", 00:21:21.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:21.414 "listen_address": { 00:21:21.414 "trtype": "RDMA", 00:21:21.414 "adrfam": "IPv4", 00:21:21.414 "traddr": "192.168.100.8", 00:21:21.414 "trsvcid": "4420" 00:21:21.414 }, 00:21:21.414 "peer_address": { 00:21:21.414 "trtype": "RDMA", 00:21:21.414 "adrfam": "IPv4", 00:21:21.414 "traddr": "192.168.100.8", 00:21:21.414 "trsvcid": "45673" 00:21:21.414 }, 00:21:21.414 "auth": { 00:21:21.414 "state": "completed", 00:21:21.414 "digest": "sha512", 00:21:21.414 "dhgroup": "null" 00:21:21.414 } 00:21:21.414 } 00:21:21.414 ]' 00:21:21.414 15:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.414 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.414 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.414 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:21.414 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.414 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.414 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.414 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.673 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:21.673 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:22.241 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.241 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:22.241 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.241 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
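Each SPDK host-side pass is followed by the same check from the kernel initiator via nvme-cli, again sketched here with values copied from the trace; the DHHC-1 secrets are abbreviated with ... rather than repeated in full.

    # Kernel nvme-cli leg of the cycle: connect with DH-HMAC-CHAP secrets,
    # disconnect, then deregister the host on the target.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e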
00:21:22.241 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.241 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.241 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.241 15:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.500 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.759 00:21:22.759 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.759 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.759 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.018 { 00:21:23.018 "cntlid": 99, 00:21:23.018 "qid": 0, 00:21:23.018 "state": "enabled", 00:21:23.018 "thread": "nvmf_tgt_poll_group_000", 00:21:23.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:23.018 "listen_address": { 00:21:23.018 "trtype": "RDMA", 00:21:23.018 "adrfam": "IPv4", 00:21:23.018 "traddr": "192.168.100.8", 00:21:23.018 "trsvcid": "4420" 00:21:23.018 }, 00:21:23.018 "peer_address": { 00:21:23.018 "trtype": "RDMA", 00:21:23.018 "adrfam": "IPv4", 00:21:23.018 "traddr": "192.168.100.8", 00:21:23.018 "trsvcid": "46620" 00:21:23.018 }, 00:21:23.018 "auth": { 00:21:23.018 "state": "completed", 00:21:23.018 "digest": "sha512", 00:21:23.018 "dhgroup": "null" 00:21:23.018 } 00:21:23.018 } 00:21:23.018 ]' 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.018 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:23.019 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.019 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.019 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.019 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.278 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:23.278 15:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:23.844 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.103 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:24.103 
15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.103 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.103 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.103 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.103 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.103 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.362 15:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.362 00:21:24.620 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.620 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.620 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.620 
15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.620 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.620 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.620 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.620 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.620 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.620 { 00:21:24.620 "cntlid": 101, 00:21:24.620 "qid": 0, 00:21:24.620 "state": "enabled", 00:21:24.620 "thread": "nvmf_tgt_poll_group_000", 00:21:24.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:24.620 "listen_address": { 00:21:24.620 "trtype": "RDMA", 00:21:24.620 "adrfam": "IPv4", 00:21:24.620 "traddr": "192.168.100.8", 00:21:24.620 "trsvcid": "4420" 00:21:24.620 }, 00:21:24.620 "peer_address": { 00:21:24.620 "trtype": "RDMA", 00:21:24.620 "adrfam": "IPv4", 00:21:24.620 "traddr": "192.168.100.8", 00:21:24.620 "trsvcid": "53907" 00:21:24.620 }, 00:21:24.620 "auth": { 00:21:24.620 "state": "completed", 00:21:24.620 "digest": "sha512", 00:21:24.620 "dhgroup": "null" 00:21:24.620 } 00:21:24.620 } 00:21:24.620 ]' 00:21:24.620 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.879 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.879 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.879 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:24.879 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.879 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.879 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.879 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.138 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:25.138 15:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:25.705 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.705 15:40:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:25.705 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.705 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.705 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.705 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.705 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.705 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.963 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:25.963 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.963 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.963 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:25.963 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.963 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.964 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:25.964 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.964 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.964 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.964 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.964 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.964 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.222 00:21:26.222 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.222 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.222 15:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.482 { 00:21:26.482 "cntlid": 103, 00:21:26.482 "qid": 0, 00:21:26.482 "state": "enabled", 00:21:26.482 "thread": "nvmf_tgt_poll_group_000", 00:21:26.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:26.482 "listen_address": { 00:21:26.482 "trtype": "RDMA", 00:21:26.482 "adrfam": "IPv4", 00:21:26.482 "traddr": "192.168.100.8", 00:21:26.482 "trsvcid": "4420" 00:21:26.482 }, 00:21:26.482 "peer_address": { 00:21:26.482 "trtype": "RDMA", 00:21:26.482 "adrfam": "IPv4", 00:21:26.482 "traddr": "192.168.100.8", 00:21:26.482 "trsvcid": "36598" 00:21:26.482 }, 00:21:26.482 "auth": { 00:21:26.482 "state": "completed", 00:21:26.482 "digest": "sha512", 00:21:26.482 "dhgroup": "null" 00:21:26.482 } 00:21:26.482 } 00:21:26.482 ]' 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.482 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.741 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:26.741 15:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:27.309 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.570 15:40:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.570 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.571 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.571 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.571 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.571 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.852 00:21:27.852 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:21:27.852 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.852 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.146 { 00:21:28.146 "cntlid": 105, 00:21:28.146 "qid": 0, 00:21:28.146 "state": "enabled", 00:21:28.146 "thread": "nvmf_tgt_poll_group_000", 00:21:28.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:28.146 "listen_address": { 00:21:28.146 "trtype": "RDMA", 00:21:28.146 "adrfam": "IPv4", 00:21:28.146 "traddr": "192.168.100.8", 00:21:28.146 "trsvcid": "4420" 00:21:28.146 }, 00:21:28.146 "peer_address": { 00:21:28.146 "trtype": "RDMA", 00:21:28.146 "adrfam": "IPv4", 00:21:28.146 "traddr": "192.168.100.8", 00:21:28.146 "trsvcid": "41122" 00:21:28.146 }, 00:21:28.146 "auth": { 00:21:28.146 "state": "completed", 00:21:28.146 "digest": "sha512", 00:21:28.146 "dhgroup": "ffdhe2048" 00:21:28.146 } 00:21:28.146 } 00:21:28.146 ]' 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.146 15:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.405 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:28.405 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:28.973 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.232 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:29.232 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.232 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.232 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.232 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.232 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.232 15:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.491 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.751 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.751 { 00:21:29.751 "cntlid": 107, 00:21:29.751 "qid": 0, 00:21:29.751 "state": "enabled", 00:21:29.751 "thread": "nvmf_tgt_poll_group_000", 00:21:29.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:29.751 "listen_address": { 00:21:29.751 "trtype": "RDMA", 00:21:29.751 "adrfam": "IPv4", 00:21:29.751 "traddr": "192.168.100.8", 00:21:29.751 "trsvcid": "4420" 00:21:29.751 }, 00:21:29.751 "peer_address": { 00:21:29.751 "trtype": "RDMA", 00:21:29.751 "adrfam": "IPv4", 00:21:29.751 "traddr": "192.168.100.8", 00:21:29.751 "trsvcid": "56776" 00:21:29.751 }, 00:21:29.751 "auth": { 00:21:29.751 "state": "completed", 00:21:29.751 "digest": "sha512", 00:21:29.751 "dhgroup": "ffdhe2048" 00:21:29.751 } 00:21:29.751 } 00:21:29.751 ]' 00:21:29.751 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.010 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.010 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.010 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:30.010 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.010 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.010 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.010 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.269 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 
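A condensed sketch of the RPC sequence that every connect_authenticate round above runs; the paths, address, and NQNs are copied verbatim from the trace, while the RPC shorthand variable is introduced here only to keep the lines readable (rpc_cmd is the trace's target-side RPC wrapper):

RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e"

# Target side: register the host on the subsystem with a DH-HMAC-CHAP key pair.
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a bdev controller over RDMA, authenticating with the same pair.
$RPC bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller exists, then detach it before the next round.
[[ "$($RPC bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
$RPC bdev_nvme_detach_controller nvme0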
00:21:30.269 15:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:30.837 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.837 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:30.837 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.837 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.837 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.837 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.837 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.837 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.097 15:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.356 00:21:31.356 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.356 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.356 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.615 { 00:21:31.615 "cntlid": 109, 00:21:31.615 "qid": 0, 00:21:31.615 "state": "enabled", 00:21:31.615 "thread": "nvmf_tgt_poll_group_000", 00:21:31.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:31.615 "listen_address": { 00:21:31.615 "trtype": "RDMA", 00:21:31.615 "adrfam": "IPv4", 00:21:31.615 "traddr": "192.168.100.8", 00:21:31.615 "trsvcid": "4420" 00:21:31.615 }, 00:21:31.615 "peer_address": { 00:21:31.615 "trtype": "RDMA", 00:21:31.615 "adrfam": "IPv4", 00:21:31.615 "traddr": "192.168.100.8", 00:21:31.615 "trsvcid": "33857" 00:21:31.615 }, 00:21:31.615 "auth": { 00:21:31.615 "state": "completed", 00:21:31.615 "digest": "sha512", 00:21:31.615 "dhgroup": "ffdhe2048" 00:21:31.615 } 00:21:31.615 } 00:21:31.615 ]' 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.615 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.874 15:40:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:31.874 15:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:32.441 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.700 15:40:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.700 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.959 00:21:32.959 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.959 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.959 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.218 { 00:21:33.218 "cntlid": 111, 00:21:33.218 "qid": 0, 00:21:33.218 "state": "enabled", 00:21:33.218 "thread": "nvmf_tgt_poll_group_000", 00:21:33.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:33.218 "listen_address": { 00:21:33.218 "trtype": "RDMA", 00:21:33.218 "adrfam": "IPv4", 00:21:33.218 "traddr": "192.168.100.8", 00:21:33.218 "trsvcid": "4420" 00:21:33.218 }, 00:21:33.218 "peer_address": { 00:21:33.218 "trtype": "RDMA", 00:21:33.218 "adrfam": "IPv4", 00:21:33.218 "traddr": "192.168.100.8", 00:21:33.218 "trsvcid": "35442" 00:21:33.218 }, 00:21:33.218 "auth": { 00:21:33.218 "state": "completed", 00:21:33.218 "digest": "sha512", 00:21:33.218 "dhgroup": "ffdhe2048" 00:21:33.218 } 00:21:33.218 } 00:21:33.218 ]' 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.218 15:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.477 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:33.477 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:34.044 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.303 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:34.303 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.303 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.303 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.303 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.303 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.303 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.303 15:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.562 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.821 00:21:34.821 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.821 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.821 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.821 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.821 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.821 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.821 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.080 { 00:21:35.080 "cntlid": 113, 00:21:35.080 "qid": 0, 00:21:35.080 "state": "enabled", 00:21:35.080 "thread": "nvmf_tgt_poll_group_000", 00:21:35.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:35.080 "listen_address": { 00:21:35.080 "trtype": "RDMA", 00:21:35.080 "adrfam": "IPv4", 00:21:35.080 "traddr": "192.168.100.8", 00:21:35.080 "trsvcid": "4420" 00:21:35.080 }, 00:21:35.080 "peer_address": { 00:21:35.080 "trtype": "RDMA", 00:21:35.080 "adrfam": "IPv4", 00:21:35.080 "traddr": "192.168.100.8", 00:21:35.080 "trsvcid": "59802" 00:21:35.080 }, 00:21:35.080 "auth": { 00:21:35.080 "state": "completed", 00:21:35.080 "digest": "sha512", 00:21:35.080 "dhgroup": "ffdhe3072" 00:21:35.080 } 00:21:35.080 } 00:21:35.080 ]' 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.080 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.339 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:35.339 15:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:35.906 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.906 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:35.906 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.906 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.906 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.906 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.906 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.906 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 
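The same secrets are also exercised through the kernel initiator, as the nvme_connect echoes in this trace show. A minimal sketch, with KEY and CKEY standing in for the full DHHC-1:xx:...: blobs printed above (-i and -l are nvme-cli's I/O-queue-count and ctrl-loss-tmo shorthands):

HOSTID="8013ee90-59d8-e711-906e-00163566263e"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$HOSTID"

# Connect with bidirectional DH-HMAC-CHAP: --dhchap-secret is the host key,
# --dhchap-ctrl-secret the controller key the target must prove it holds.
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
  --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

# Each round ends by dropping the kernel connection again.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0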
00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.165 15:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.424 00:21:36.424 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.424 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.424 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.683 { 00:21:36.683 "cntlid": 115, 00:21:36.683 "qid": 0, 00:21:36.683 "state": "enabled", 00:21:36.683 "thread": "nvmf_tgt_poll_group_000", 00:21:36.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:36.683 "listen_address": { 00:21:36.683 "trtype": "RDMA", 00:21:36.683 "adrfam": "IPv4", 00:21:36.683 "traddr": "192.168.100.8", 00:21:36.683 "trsvcid": "4420" 00:21:36.683 }, 00:21:36.683 "peer_address": { 00:21:36.683 "trtype": "RDMA", 00:21:36.683 "adrfam": "IPv4", 00:21:36.683 "traddr": "192.168.100.8", 00:21:36.683 "trsvcid": "49497" 00:21:36.683 }, 00:21:36.683 "auth": { 00:21:36.683 "state": "completed", 00:21:36.683 "digest": "sha512", 00:21:36.683 "dhgroup": "ffdhe3072" 00:21:36.683 } 00:21:36.683 } 00:21:36.683 ]' 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
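A round only counts as authenticated once the qpair listing reflects the negotiated parameters; condensed, the checks that target/auth.sh@74-77 perform around this point amount to:

qpairs="$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)"

# The qpair's auth object must report the digest and DH group under test and
# a completed DH-HMAC-CHAP transaction for the round to pass.
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe3072 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]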
00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.683 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.942 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:36.942 15:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:37.509 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.768 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:37.768 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.768 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.768 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.768 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.768 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.768 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.769 
15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.769 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.028 00:21:38.286 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.286 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.286 15:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.287 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.287 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.287 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.287 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.287 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.287 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.287 { 00:21:38.287 "cntlid": 117, 00:21:38.287 "qid": 0, 00:21:38.287 "state": "enabled", 00:21:38.287 "thread": "nvmf_tgt_poll_group_000", 00:21:38.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:38.287 "listen_address": { 00:21:38.287 "trtype": "RDMA", 00:21:38.287 "adrfam": "IPv4", 00:21:38.287 "traddr": "192.168.100.8", 00:21:38.287 "trsvcid": "4420" 00:21:38.287 }, 00:21:38.287 "peer_address": { 00:21:38.287 "trtype": "RDMA", 00:21:38.287 "adrfam": "IPv4", 00:21:38.287 "traddr": "192.168.100.8", 00:21:38.287 "trsvcid": "43373" 00:21:38.287 }, 00:21:38.287 "auth": { 00:21:38.287 "state": "completed", 00:21:38.287 "digest": "sha512", 00:21:38.287 "dhgroup": "ffdhe3072" 00:21:38.287 } 00:21:38.287 } 00:21:38.287 ]' 00:21:38.287 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:21:38.287 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.287 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.545 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:38.545 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.545 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.545 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.545 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.803 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:38.803 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:39.369 15:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.369 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:39.369 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.369 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.369 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.369 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.369 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.369 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
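The stretch above has just finished the ffdhe3072/key2 pass and is starting key3; every pass in this log follows the same RPC sequence. A minimal sketch of that sequence, assuming the paths and NQNs shown in the log, and treating `rpc_cmd` as the suite's wrapper around the target-side RPC socket (the variable names here are this sketch's, not auth.sh's):

```bash
#!/usr/bin/env bash
# One connect_authenticate pass as this log repeats it (sketch only; rpc_cmd is
# the test suite's target-side RPC helper, HOSTRPC the host-side rpc.py socket).
HOSTRPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
digest=sha512 dhgroup=ffdhe3072 keyid=3

# Pin the host initiator to the digest/dhgroup under test.
$HOSTRPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Register the host on the subsystem with the key pair for this iteration.
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a bdev controller over RDMA through the authenticated path.
$HOSTRPC bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
```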
00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.628 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.888 00:21:39.888 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.888 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.888 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.147 { 00:21:40.147 "cntlid": 119, 00:21:40.147 "qid": 0, 00:21:40.147 "state": "enabled", 00:21:40.147 "thread": "nvmf_tgt_poll_group_000", 00:21:40.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:40.147 "listen_address": { 00:21:40.147 "trtype": "RDMA", 00:21:40.147 "adrfam": "IPv4", 00:21:40.147 "traddr": "192.168.100.8", 00:21:40.147 "trsvcid": "4420" 00:21:40.147 }, 00:21:40.147 "peer_address": { 00:21:40.147 "trtype": "RDMA", 00:21:40.147 "adrfam": "IPv4", 00:21:40.147 "traddr": "192.168.100.8", 00:21:40.147 "trsvcid": "41249" 00:21:40.147 }, 00:21:40.147 "auth": { 00:21:40.147 "state": "completed", 00:21:40.147 "digest": "sha512", 00:21:40.147 "dhgroup": "ffdhe3072" 
00:21:40.147 } 00:21:40.147 } 00:21:40.147 ]' 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.147 15:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.407 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:40.407 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:40.975 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.234 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:41.234 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.234 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.234 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.234 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.234 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.234 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:41.234 15:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:41.234 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:41.234 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.234 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:21:41.234 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:41.234 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.234 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.234 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.234 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.234 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.494 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.494 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.494 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.494 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.753 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.753 { 00:21:41.753 "cntlid": 121, 00:21:41.753 "qid": 0, 00:21:41.753 "state": "enabled", 00:21:41.753 "thread": "nvmf_tgt_poll_group_000", 00:21:41.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:41.753 "listen_address": { 00:21:41.753 "trtype": "RDMA", 00:21:41.753 "adrfam": "IPv4", 00:21:41.753 "traddr": "192.168.100.8", 00:21:41.753 "trsvcid": "4420" 00:21:41.753 }, 00:21:41.753 "peer_address": { 00:21:41.753 "trtype": "RDMA", 
00:21:41.753 "adrfam": "IPv4", 00:21:41.753 "traddr": "192.168.100.8", 00:21:41.753 "trsvcid": "34194" 00:21:41.753 }, 00:21:41.753 "auth": { 00:21:41.753 "state": "completed", 00:21:41.753 "digest": "sha512", 00:21:41.753 "dhgroup": "ffdhe4096" 00:21:41.753 } 00:21:41.753 } 00:21:41.753 ]' 00:21:41.753 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.013 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.013 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.013 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.013 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.013 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.013 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.013 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.273 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:42.273 15:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:42.841 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.841 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:42.841 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.841 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.841 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.841 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.841 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.841 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:21:43.099 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:43.099 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.099 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.099 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:43.099 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.099 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.099 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.099 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.100 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.100 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.100 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.100 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.100 15:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.358 00:21:43.358 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.358 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.358 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.617 { 00:21:43.617 "cntlid": 123, 00:21:43.617 "qid": 0, 00:21:43.617 "state": "enabled", 00:21:43.617 "thread": "nvmf_tgt_poll_group_000", 
00:21:43.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:43.617 "listen_address": { 00:21:43.617 "trtype": "RDMA", 00:21:43.617 "adrfam": "IPv4", 00:21:43.617 "traddr": "192.168.100.8", 00:21:43.617 "trsvcid": "4420" 00:21:43.617 }, 00:21:43.617 "peer_address": { 00:21:43.617 "trtype": "RDMA", 00:21:43.617 "adrfam": "IPv4", 00:21:43.617 "traddr": "192.168.100.8", 00:21:43.617 "trsvcid": "51784" 00:21:43.617 }, 00:21:43.617 "auth": { 00:21:43.617 "state": "completed", 00:21:43.617 "digest": "sha512", 00:21:43.617 "dhgroup": "ffdhe4096" 00:21:43.617 } 00:21:43.617 } 00:21:43.617 ]' 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:43.617 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.876 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.876 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.876 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.876 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:43.876 15:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:44.444 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.703 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:44.703 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.703 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.703 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.703 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.703 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:21:44.703 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.962 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.222 00:21:45.222 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.222 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.222 15:40:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
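Each iteration then closes with a kernel-initiator pass: nvme-cli connects with the same DH-HMAC-CHAP secrets, disconnects, and the host entry is removed from the subsystem. A sketch under the same assumptions as above, with placeholder secrets (the DHHC-1 strings in the log are the suite's generated test keys; the ones below are not real keys):

```bash
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTID=8013ee90-59d8-e711-906e-00163566263e

# Kernel-initiator connect with DH-HMAC-CHAP secrets (placeholders below).
nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 \
     -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" -l 0 \
     --dhchap-secret 'DHHC-1:01:<host key>' \
     --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'

nvme disconnect -n "$SUBNQN"

# Drop the host entry so the next key/dhgroup pass starts clean.
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "nqn.2014-08.org.nvmexpress:uuid:$HOSTID"
```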
00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.482 { 00:21:45.482 "cntlid": 125, 00:21:45.482 "qid": 0, 00:21:45.482 "state": "enabled", 00:21:45.482 "thread": "nvmf_tgt_poll_group_000", 00:21:45.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:45.482 "listen_address": { 00:21:45.482 "trtype": "RDMA", 00:21:45.482 "adrfam": "IPv4", 00:21:45.482 "traddr": "192.168.100.8", 00:21:45.482 "trsvcid": "4420" 00:21:45.482 }, 00:21:45.482 "peer_address": { 00:21:45.482 "trtype": "RDMA", 00:21:45.482 "adrfam": "IPv4", 00:21:45.482 "traddr": "192.168.100.8", 00:21:45.482 "trsvcid": "33052" 00:21:45.482 }, 00:21:45.482 "auth": { 00:21:45.482 "state": "completed", 00:21:45.482 "digest": "sha512", 00:21:45.482 "dhgroup": "ffdhe4096" 00:21:45.482 } 00:21:45.482 } 00:21:45.482 ]' 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.482 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.741 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:45.741 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:46.309 15:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.309 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:46.309 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.309 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.309 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.309 15:40:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.309 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.309 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.569 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.828 00:21:46.828 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.828 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.828 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.088 15:40:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.088 { 00:21:47.088 "cntlid": 127, 00:21:47.088 "qid": 0, 00:21:47.088 "state": "enabled", 00:21:47.088 "thread": "nvmf_tgt_poll_group_000", 00:21:47.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:47.088 "listen_address": { 00:21:47.088 "trtype": "RDMA", 00:21:47.088 "adrfam": "IPv4", 00:21:47.088 "traddr": "192.168.100.8", 00:21:47.088 "trsvcid": "4420" 00:21:47.088 }, 00:21:47.088 "peer_address": { 00:21:47.088 "trtype": "RDMA", 00:21:47.088 "adrfam": "IPv4", 00:21:47.088 "traddr": "192.168.100.8", 00:21:47.088 "trsvcid": "33054" 00:21:47.088 }, 00:21:47.088 "auth": { 00:21:47.088 "state": "completed", 00:21:47.088 "digest": "sha512", 00:21:47.088 "dhgroup": "ffdhe4096" 00:21:47.088 } 00:21:47.088 } 00:21:47.088 ]' 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:47.088 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.347 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.347 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.347 15:40:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.347 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:47.347 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:47.915 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.175 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:48.175 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.175 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.175 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.175 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.175 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.175 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.175 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.434 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:48.434 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.434 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.434 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.434 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.434 15:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.434 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.434 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.434 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.434 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.434 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.434 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.434 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.694 00:21:48.694 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.694 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.694 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.953 15:40:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.953 { 00:21:48.953 "cntlid": 129, 00:21:48.953 "qid": 0, 00:21:48.953 "state": "enabled", 00:21:48.953 "thread": "nvmf_tgt_poll_group_000", 00:21:48.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:48.953 "listen_address": { 00:21:48.953 "trtype": "RDMA", 00:21:48.953 "adrfam": "IPv4", 00:21:48.953 "traddr": "192.168.100.8", 00:21:48.953 "trsvcid": "4420" 00:21:48.953 }, 00:21:48.953 "peer_address": { 00:21:48.953 "trtype": "RDMA", 00:21:48.953 "adrfam": "IPv4", 00:21:48.953 "traddr": "192.168.100.8", 00:21:48.953 "trsvcid": "39796" 00:21:48.953 }, 00:21:48.953 "auth": { 00:21:48.953 "state": "completed", 00:21:48.953 "digest": "sha512", 00:21:48.953 "dhgroup": "ffdhe6144" 00:21:48.953 } 00:21:48.953 } 00:21:48.953 ]' 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.953 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.212 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:49.212 15:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:49.781 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.040 15:40:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.040 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.299 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.299 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.299 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.299 15:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.559 00:21:50.559 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.559 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:50.559 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.819 { 00:21:50.819 "cntlid": 131, 00:21:50.819 "qid": 0, 00:21:50.819 "state": "enabled", 00:21:50.819 "thread": "nvmf_tgt_poll_group_000", 00:21:50.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:50.819 "listen_address": { 00:21:50.819 "trtype": "RDMA", 00:21:50.819 "adrfam": "IPv4", 00:21:50.819 "traddr": "192.168.100.8", 00:21:50.819 "trsvcid": "4420" 00:21:50.819 }, 00:21:50.819 "peer_address": { 00:21:50.819 "trtype": "RDMA", 00:21:50.819 "adrfam": "IPv4", 00:21:50.819 "traddr": "192.168.100.8", 00:21:50.819 "trsvcid": "54312" 00:21:50.819 }, 00:21:50.819 "auth": { 00:21:50.819 "state": "completed", 00:21:50.819 "digest": "sha512", 00:21:50.819 "dhgroup": "ffdhe6144" 00:21:50.819 } 00:21:50.819 } 00:21:50.819 ]' 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.819 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.078 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:51.078 15:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret 
DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:51.646 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.646 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:51.646 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.906 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.473 00:21:52.473 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.473 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.473 15:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.473 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.473 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.473 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.473 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.473 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.473 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.473 { 00:21:52.473 "cntlid": 133, 00:21:52.473 "qid": 0, 00:21:52.473 "state": "enabled", 00:21:52.473 "thread": "nvmf_tgt_poll_group_000", 00:21:52.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:52.473 "listen_address": { 00:21:52.473 "trtype": "RDMA", 00:21:52.473 "adrfam": "IPv4", 00:21:52.473 "traddr": "192.168.100.8", 00:21:52.473 "trsvcid": "4420" 00:21:52.473 }, 00:21:52.473 "peer_address": { 00:21:52.473 "trtype": "RDMA", 00:21:52.473 "adrfam": "IPv4", 00:21:52.473 "traddr": "192.168.100.8", 00:21:52.473 "trsvcid": "53344" 00:21:52.473 }, 00:21:52.473 "auth": { 00:21:52.473 "state": "completed", 00:21:52.473 "digest": "sha512", 00:21:52.473 "dhgroup": "ffdhe6144" 00:21:52.473 } 00:21:52.473 } 00:21:52.473 ]' 00:21:52.473 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.731 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.731 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.731 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:52.731 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.731 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.731 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.731 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.989 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:52.989 15:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:21:53.555 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.555 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:53.555 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.555 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.555 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.555 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.555 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:53.555 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.814 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.073 00:21:54.073 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.073 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.073 15:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.332 { 00:21:54.332 "cntlid": 135, 00:21:54.332 "qid": 0, 00:21:54.332 "state": "enabled", 00:21:54.332 "thread": "nvmf_tgt_poll_group_000", 00:21:54.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:54.332 "listen_address": { 00:21:54.332 "trtype": "RDMA", 00:21:54.332 "adrfam": "IPv4", 00:21:54.332 "traddr": "192.168.100.8", 00:21:54.332 "trsvcid": "4420" 00:21:54.332 }, 00:21:54.332 "peer_address": { 00:21:54.332 "trtype": "RDMA", 00:21:54.332 "adrfam": "IPv4", 00:21:54.332 "traddr": "192.168.100.8", 00:21:54.332 "trsvcid": "54772" 00:21:54.332 }, 00:21:54.332 "auth": { 00:21:54.332 "state": "completed", 00:21:54.332 "digest": "sha512", 00:21:54.332 "dhgroup": "ffdhe6144" 00:21:54.332 } 00:21:54.332 } 00:21:54.332 ]' 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:54.332 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.592 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.592 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.592 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.592 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 
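The iterations above all follow the same shape: the host-side bdev_nvme options are narrowed to exactly one digest/dhgroup pair, the key under test (plus its controller key, when the harness generated one) is mapped to the host NQN on the subsystem, a controller is attached through the host RPC socket, the negotiated auth parameters are asserted from the target's qpair list, and everything is torn down before the kernel initiator repeats the same handshake. A minimal sketch of one iteration follows; the rpc.py and socket paths are the ones used in this run, and <hostnqn>, <uuid>, and the <DHHC-1 ...> placeholders stand in for the full host NQN and generated secrets seen throughout the trace:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # Host side: accept only the digest/dhgroup pair under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Target side: map key2 (and ckey2, for bidirectional auth) to the host NQN.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach from the host; a successful attach means DH-HMAC-CHAP completed.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Assert what was actually negotiated on the target's qpair.
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'   # sha512 / ffdhe6144 / completed

  # Tear down, then repeat the handshake with the kernel initiator.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <hostnqn> --hostid <uuid> -l 0 --dhchap-secret <DHHC-1 secret> \
      --dhchap-ctrl-secret <DHHC-1 ctrl secret>
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>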
00:21:54.592 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:21:55.532 15:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.532 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.145 00:21:56.145 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.145 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.145 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.403 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.403 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.403 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.403 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.403 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.403 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.403 { 00:21:56.403 "cntlid": 137, 00:21:56.403 "qid": 0, 00:21:56.403 "state": "enabled", 00:21:56.403 "thread": "nvmf_tgt_poll_group_000", 00:21:56.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:56.403 "listen_address": { 00:21:56.403 "trtype": "RDMA", 00:21:56.403 "adrfam": "IPv4", 00:21:56.403 "traddr": "192.168.100.8", 00:21:56.403 "trsvcid": "4420" 00:21:56.403 }, 00:21:56.403 "peer_address": { 00:21:56.403 "trtype": "RDMA", 00:21:56.403 "adrfam": "IPv4", 00:21:56.403 "traddr": "192.168.100.8", 00:21:56.403 "trsvcid": "43413" 00:21:56.403 }, 00:21:56.403 "auth": { 00:21:56.403 "state": "completed", 00:21:56.403 "digest": "sha512", 00:21:56.403 "dhgroup": "ffdhe8192" 00:21:56.403 } 00:21:56.403 } 00:21:56.403 ]' 00:21:56.403 15:40:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.403 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.403 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.403 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.403 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.403 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.403 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.403 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.662 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:56.662 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:21:57.232 15:40:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.232 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:57.232 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.232 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:21:57.492 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.493 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.493 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.061 00:21:58.061 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.061 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.061 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.323 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.323 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.323 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.323 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.323 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.323 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.323 { 00:21:58.323 "cntlid": 139, 00:21:58.323 "qid": 0, 00:21:58.323 "state": "enabled", 00:21:58.323 "thread": "nvmf_tgt_poll_group_000", 00:21:58.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:58.323 "listen_address": { 00:21:58.323 "trtype": "RDMA", 00:21:58.323 "adrfam": "IPv4", 00:21:58.323 "traddr": "192.168.100.8", 00:21:58.323 "trsvcid": "4420" 00:21:58.323 }, 00:21:58.323 "peer_address": { 00:21:58.323 "trtype": "RDMA", 00:21:58.323 "adrfam": "IPv4", 00:21:58.323 "traddr": "192.168.100.8", 00:21:58.323 "trsvcid": "59900" 00:21:58.323 }, 00:21:58.323 "auth": { 00:21:58.323 "state": "completed", 00:21:58.323 "digest": "sha512", 00:21:58.323 "dhgroup": "ffdhe8192" 00:21:58.323 } 00:21:58.323 } 00:21:58.323 ]' 00:21:58.323 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.323 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.323 15:40:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.323 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.323 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.323 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.323 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.323 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.583 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:58.583 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: --dhchap-ctrl-secret DHHC-1:02:Y2Y3Nzc1MTNmZGM0YjdhMzJlNWRjZmQ3MTEyMjRhOTg4ZGIyMDNhMDZhZWRjMzhiC2h+Xw==: 00:21:59.152 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.411 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:59.411 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.411 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.411 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.411 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.411 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.411 15:40:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.411 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.412 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.980 00:21:59.980 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.980 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.980 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.241 { 00:22:00.241 "cntlid": 141, 00:22:00.241 "qid": 0, 00:22:00.241 "state": "enabled", 00:22:00.241 "thread": "nvmf_tgt_poll_group_000", 00:22:00.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:00.241 "listen_address": { 00:22:00.241 "trtype": "RDMA", 00:22:00.241 "adrfam": "IPv4", 00:22:00.241 "traddr": "192.168.100.8", 00:22:00.241 "trsvcid": "4420" 00:22:00.241 }, 00:22:00.241 "peer_address": { 00:22:00.241 "trtype": "RDMA", 00:22:00.241 "adrfam": "IPv4", 00:22:00.241 "traddr": "192.168.100.8", 00:22:00.241 "trsvcid": "56380" 00:22:00.241 }, 00:22:00.241 "auth": { 00:22:00.241 "state": "completed", 00:22:00.241 "digest": "sha512", 00:22:00.241 "dhgroup": "ffdhe8192" 00:22:00.241 } 00:22:00.241 } 00:22:00.241 ]' 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.241 15:40:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.500 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:22:00.500 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY0ODc1ZWVkMWI3MmJhMDE1ZjJjYjQ3NDNjYWRkNjA32uhH: 00:22:01.068 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.326 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:01.326 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.326 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.326 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.326 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.326 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.326 15:40:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.326 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.327 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.327 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.895 00:22:01.895 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.895 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.895 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.154 { 00:22:02.154 "cntlid": 143, 00:22:02.154 "qid": 0, 00:22:02.154 "state": "enabled", 00:22:02.154 "thread": "nvmf_tgt_poll_group_000", 00:22:02.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:02.154 "listen_address": { 00:22:02.154 "trtype": "RDMA", 00:22:02.154 "adrfam": "IPv4", 00:22:02.154 "traddr": "192.168.100.8", 00:22:02.154 "trsvcid": "4420" 00:22:02.154 }, 00:22:02.154 "peer_address": { 00:22:02.154 "trtype": "RDMA", 00:22:02.154 "adrfam": "IPv4", 00:22:02.154 "traddr": "192.168.100.8", 00:22:02.154 "trsvcid": "57128" 00:22:02.154 }, 00:22:02.154 "auth": { 00:22:02.154 "state": "completed", 00:22:02.154 "digest": "sha512", 00:22:02.154 "dhgroup": "ffdhe8192" 00:22:02.154 } 00:22:02.154 } 00:22:02.154 ]' 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.154 15:40:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.154 15:40:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.412 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:22:02.412 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:22:02.980 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:03.239 15:40:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.499 15:40:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.499 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.758 00:22:03.758 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.758 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.758 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.017 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.017 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.017 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.017 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.017 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.017 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.017 { 00:22:04.017 "cntlid": 145, 00:22:04.017 "qid": 0, 00:22:04.017 "state": "enabled", 00:22:04.017 "thread": "nvmf_tgt_poll_group_000", 00:22:04.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:04.017 "listen_address": { 00:22:04.017 "trtype": "RDMA", 00:22:04.017 "adrfam": "IPv4", 00:22:04.017 "traddr": "192.168.100.8", 00:22:04.017 "trsvcid": "4420" 00:22:04.017 }, 00:22:04.017 
"peer_address": { 00:22:04.017 "trtype": "RDMA", 00:22:04.017 "adrfam": "IPv4", 00:22:04.017 "traddr": "192.168.100.8", 00:22:04.017 "trsvcid": "53473" 00:22:04.017 }, 00:22:04.017 "auth": { 00:22:04.017 "state": "completed", 00:22:04.017 "digest": "sha512", 00:22:04.017 "dhgroup": "ffdhe8192" 00:22:04.017 } 00:22:04.017 } 00:22:04.017 ]' 00:22:04.017 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.017 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.017 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.276 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.276 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.276 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.276 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.276 15:40:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.276 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:22:04.276 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MmYyMDg4NGJiZDFmMzc0OTZjMmU4ZjNkMDQzMjQwZjFkYmQyNmQ3MzcwYTA1ZmQ4uvtkDg==: --dhchap-ctrl-secret DHHC-1:03:OTU2ZGEyYjRlNzcyNzc4ZjFkNWQzZThiODlhNzM2ZWU3ZGU2N2JkYzJlM2ZhOWE1Nzg4OGJmOWU2YjkyOGIyNVY7rVk=: 00:22:05.213 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.213 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:05.213 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.213 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.214 15:40:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:05.214 15:40:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:05.472 request: 00:22:05.472 { 00:22:05.472 "name": "nvme0", 00:22:05.472 "trtype": "rdma", 00:22:05.472 "traddr": "192.168.100.8", 00:22:05.472 "adrfam": "ipv4", 00:22:05.472 "trsvcid": "4420", 00:22:05.472 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:05.472 "prchk_reftag": false, 00:22:05.472 "prchk_guard": false, 00:22:05.472 "hdgst": false, 00:22:05.472 "ddgst": false, 00:22:05.472 "dhchap_key": "key2", 00:22:05.472 "allow_unrecognized_csi": false, 00:22:05.472 "method": "bdev_nvme_attach_controller", 00:22:05.472 "req_id": 1 00:22:05.472 } 00:22:05.472 Got JSON-RPC error response 00:22:05.472 response: 00:22:05.472 { 00:22:05.472 "code": -5, 00:22:05.472 "message": "Input/output error" 00:22:05.472 } 00:22:05.472 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.473 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.732 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.991 request: 00:22:05.991 { 00:22:05.991 "name": "nvme0", 00:22:05.991 "trtype": "rdma", 00:22:05.991 "traddr": "192.168.100.8", 00:22:05.991 "adrfam": "ipv4", 00:22:05.991 "trsvcid": "4420", 00:22:05.991 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:05.991 "prchk_reftag": false, 00:22:05.991 "prchk_guard": false, 00:22:05.991 "hdgst": false, 00:22:05.991 "ddgst": false, 00:22:05.991 "dhchap_key": "key1", 00:22:05.991 "dhchap_ctrlr_key": "ckey2", 00:22:05.991 "allow_unrecognized_csi": false, 00:22:05.991 "method": "bdev_nvme_attach_controller", 00:22:05.991 "req_id": 1 00:22:05.991 } 00:22:05.991 Got JSON-RPC error response 00:22:05.991 response: 00:22:05.991 { 00:22:05.991 "code": -5, 00:22:05.991 "message": "Input/output error" 00:22:05.991 } 00:22:06.251 15:40:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.251 15:40:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.510 request: 00:22:06.510 { 00:22:06.510 "name": "nvme0", 
00:22:06.510 "trtype": "rdma", 00:22:06.510 "traddr": "192.168.100.8", 00:22:06.510 "adrfam": "ipv4", 00:22:06.510 "trsvcid": "4420", 00:22:06.510 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:06.510 "prchk_reftag": false, 00:22:06.510 "prchk_guard": false, 00:22:06.510 "hdgst": false, 00:22:06.510 "ddgst": false, 00:22:06.510 "dhchap_key": "key1", 00:22:06.510 "dhchap_ctrlr_key": "ckey1", 00:22:06.510 "allow_unrecognized_csi": false, 00:22:06.510 "method": "bdev_nvme_attach_controller", 00:22:06.510 "req_id": 1 00:22:06.510 } 00:22:06.510 Got JSON-RPC error response 00:22:06.510 response: 00:22:06.510 { 00:22:06.510 "code": -5, 00:22:06.510 "message": "Input/output error" 00:22:06.510 } 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2298927 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2298927 ']' 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2298927 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:06.510 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2298927 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2298927' 00:22:06.769 killing process with pid 2298927 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2298927 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2298927 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.769 15:40:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2323373 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2323373 00:22:06.769 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2323373 ']' 00:22:06.770 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.770 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:06.770 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.770 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:06.770 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2323373 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2323373 ']' 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
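At this point in the trace the previous target (pid 2298927) has been killed and nvmf_tgt is being relaunched with --wait-for-rpc, which holds subsystem initialization until RPC configuration is complete, and -L nvmf_auth, which enables DHCHAP debug logging; the test then blocks until the app listens on /var/tmp/spdk.sock. A minimal sketch of that restart step, assuming an SPDK checkout at a placeholder $SPDK_DIR rather than the Jenkins workspace path used above:

    # Relaunch the nvmf target with DHCHAP debug logging, as the trace above does.
    SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # placeholder location (assumption)

    # -i 0: shared-memory instance id; -e 0xFFFF: tracepoint group mask;
    # --wait-for-rpc: defer subsystem init until framework_start_init arrives;
    # -L nvmf_auth: turn on the nvmf_auth debug log flag.
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Block until the app answers on the default RPC socket, /var/tmp/spdk.sock.
    "$SPDK_DIR/scripts/rpc.py" -t 100 rpc_get_methods >/dev/null
    echo "nvmf_tgt (pid $nvmfpid) is up"

The real run goes through the nvmfappstart and waitforlisten helpers in nvmf/common.sh and autotest_common.sh, which add retry loops and cleanup traps; the sketch only shows the shape of the step.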
00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:07.029 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.288 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:07.288 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:07.288 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:07.288 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.288 15:40:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.288 null0 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yjG 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.CyA ]] 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CyA 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dMD 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.547 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.NN6 ]] 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NN6 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
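The loop traced here loads the previously generated secret files into the target keyring: key<i> names the host-to-controller secret and ckey<i> the controller (bidirectional) secret, each stored on disk in the DHHC-1:<hash-id>:<base64>: form that appears verbatim later in this log (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A sketch of producing and registering one such key, assuming nvme-cli's gen-dhchap-key command is available and reusing the placeholder $SPDK_DIR from the aside above (file and key names here are illustrative only):

    # Generate a SHA-256-transformed DHCHAP secret ("DHHC-1:01:<base64>:")
    # and register it under the name "key1" in the target's keyring.
    key_file=/tmp/spdk.key-sha256.example   # illustrative path (assumption)
    nvme gen-dhchap-key --hmac=1 --key-length=32 > "$key_file"
    chmod 0600 "$key_file"
    "$SPDK_DIR/scripts/rpc.py" keyring_file_add_key key1 "$key_file"

Subsystem hosts then reference the registered names, as the nvmf_subsystem_add_host ... --dhchap-key key1 --dhchap-ctrlr-key ckey1 calls elsewhere in this trace do.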
00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1SL 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.WdO ]] 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WdO 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gsg 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.548 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:08.486 nvme0n1 00:22:08.486 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.486 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.486 15:40:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.486 { 00:22:08.486 "cntlid": 1, 00:22:08.486 "qid": 0, 00:22:08.486 "state": "enabled", 00:22:08.486 "thread": "nvmf_tgt_poll_group_000", 00:22:08.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:08.486 "listen_address": { 00:22:08.486 "trtype": "RDMA", 00:22:08.486 "adrfam": "IPv4", 00:22:08.486 "traddr": "192.168.100.8", 00:22:08.486 "trsvcid": "4420" 00:22:08.486 }, 00:22:08.486 "peer_address": { 00:22:08.486 "trtype": "RDMA", 00:22:08.486 "adrfam": "IPv4", 00:22:08.486 "traddr": "192.168.100.8", 00:22:08.486 "trsvcid": "51405" 00:22:08.486 }, 00:22:08.486 "auth": { 00:22:08.486 "state": "completed", 00:22:08.486 "digest": "sha512", 00:22:08.486 "dhgroup": "ffdhe8192" 00:22:08.486 } 00:22:08.486 } 00:22:08.486 ]' 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.486 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.745 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.745 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.745 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.745 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:22:08.745 15:40:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:22:09.314 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:09.573 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.833 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.092 request: 00:22:10.092 { 00:22:10.092 "name": "nvme0", 00:22:10.092 "trtype": "rdma", 00:22:10.092 "traddr": "192.168.100.8", 00:22:10.092 "adrfam": "ipv4", 00:22:10.092 "trsvcid": "4420", 00:22:10.092 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:10.092 "prchk_reftag": false, 00:22:10.092 "prchk_guard": false, 00:22:10.092 "hdgst": false, 00:22:10.092 "ddgst": false, 00:22:10.092 "dhchap_key": "key3", 00:22:10.092 "allow_unrecognized_csi": false, 00:22:10.092 "method": "bdev_nvme_attach_controller", 00:22:10.092 "req_id": 1 00:22:10.092 } 00:22:10.092 Got JSON-RPC error response 00:22:10.092 response: 00:22:10.092 { 00:22:10.092 "code": -5, 00:22:10.092 "message": "Input/output error" 00:22:10.092 } 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 
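The recurring pattern here is a negative test: the host's allowed DHCHAP parameters are deliberately narrowed (first --dhchap-digests sha256 alone, then --dhchap-dhgroups ffdhe2048) so that they no longer match what the target requires, and the attach attempt is wrapped in NOT, which passes only if the command fails, here with JSON-RPC error -5, Input/output error. The real NOT helper in autotest_common.sh classifies exit statuses (the es=1 lines in this trace come from it); a minimal sketch of the idea:

    # Expected-failure wrapper: succeed if and only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded -> test failure
        fi
        return 0       # failure was expected (here: JSON-RPC -5, I/O error)
    }

    # Usage mirroring the trace (addresses and NQNs copied from the log):
    NOT "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3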
00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.092 15:40:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.351 request: 00:22:10.351 { 00:22:10.351 "name": "nvme0", 00:22:10.351 "trtype": "rdma", 00:22:10.351 "traddr": "192.168.100.8", 00:22:10.351 "adrfam": "ipv4", 00:22:10.351 "trsvcid": "4420", 00:22:10.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:10.351 "prchk_reftag": false, 00:22:10.351 "prchk_guard": false, 00:22:10.351 "hdgst": false, 00:22:10.351 "ddgst": false, 00:22:10.351 "dhchap_key": "key3", 00:22:10.351 "allow_unrecognized_csi": false, 00:22:10.351 "method": "bdev_nvme_attach_controller", 00:22:10.351 "req_id": 1 00:22:10.351 } 00:22:10.351 Got JSON-RPC error response 00:22:10.351 response: 00:22:10.351 { 00:22:10.351 "code": -5, 00:22:10.351 "message": "Input/output error" 00:22:10.351 } 00:22:10.351 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:10.351 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:10.351 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:10.351 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:10.351 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:10.351 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:10.351 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:10.351 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.352 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.352 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.611 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.180 request: 00:22:11.180 { 00:22:11.180 "name": "nvme0", 00:22:11.180 "trtype": "rdma", 00:22:11.180 "traddr": "192.168.100.8", 00:22:11.180 "adrfam": "ipv4", 00:22:11.180 "trsvcid": "4420", 00:22:11.180 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:11.180 "prchk_reftag": false, 00:22:11.180 "prchk_guard": false, 00:22:11.180 "hdgst": false, 00:22:11.180 "ddgst": false, 00:22:11.180 "dhchap_key": "key0", 00:22:11.180 "dhchap_ctrlr_key": "key1", 00:22:11.180 "allow_unrecognized_csi": false, 00:22:11.180 "method": "bdev_nvme_attach_controller", 00:22:11.180 "req_id": 1 00:22:11.180 } 00:22:11.180 Got JSON-RPC error response 00:22:11.180 response: 00:22:11.180 { 00:22:11.180 "code": -5, 00:22:11.180 "message": "Input/output error" 00:22:11.180 } 00:22:11.180 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:11.180 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:11.180 
15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:11.180 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:11.180 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:11.180 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:11.180 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:11.180 nvme0n1 00:22:11.181 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:11.181 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:11.181 15:40:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.440 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.440 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.440 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.699 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:22:11.699 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.699 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.699 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.699 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:11.699 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:11.699 15:40:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:12.637 nvme0n1 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:12.637 15:40:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.637 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:12.897 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.897 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:22:12.897 15:40:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: --dhchap-ctrl-secret DHHC-1:03:MWNjOWJjMWU1MGI4MTA2M2I5ZTkxMTVmNDk2ODliNzVhMDA4NzEwMWM4YzkzNzkwOTdlOGIzMDhmYjMwNDc2MTOScI4=: 00:22:13.465 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:13.465 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:13.465 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:13.465 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:13.465 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:13.465 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:13.465 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:13.465 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.465 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:13.724 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:13.984 request: 00:22:13.984 { 00:22:13.984 "name": "nvme0", 00:22:13.984 "trtype": "rdma", 00:22:13.984 "traddr": "192.168.100.8", 00:22:13.984 "adrfam": "ipv4", 00:22:13.984 "trsvcid": "4420", 00:22:13.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:13.984 "prchk_reftag": false, 00:22:13.984 "prchk_guard": false, 00:22:13.984 "hdgst": false, 00:22:13.984 "ddgst": false, 00:22:13.984 "dhchap_key": "key1", 00:22:13.984 "allow_unrecognized_csi": false, 00:22:13.984 "method": "bdev_nvme_attach_controller", 00:22:13.984 "req_id": 1 00:22:13.984 } 00:22:13.984 Got JSON-RPC error response 00:22:13.984 response: 00:22:13.984 { 00:22:13.984 "code": -5, 00:22:13.984 "message": "Input/output error" 00:22:13.984 } 00:22:14.243 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:14.243 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.243 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.243 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.243 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:14.243 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:14.243 15:40:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:14.812 nvme0n1 00:22:14.812 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:14.812 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.812 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:15.070 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.071 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.071 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.330 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:15.330 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.330 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.330 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.330 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:15.330 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:15.330 15:40:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:15.589 nvme0n1 00:22:15.589 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:15.589 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.589 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:15.589 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: '' 2s 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: ]] 00:22:15.848 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZWVjYjIwMjY1MTY0MWI5ZDg0ZDc2ODdmMzFhNTNiNzasEumW: 00:22:15.849 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:15.849 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:15.849 15:40:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.383 15:40:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: 2s 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: ]] 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NWViNDM5NjZjNjM3NWI0ZGU5NjMxMTM5NWQzMmY1ZTYzZjQ5MDIyNWMzOTI2YjZk1lvqhQ==: 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:18.383 15:40:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.286 15:40:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:20.286 15:40:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:20.854 nvme0n1 00:22:20.854 15:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.854 15:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.854 15:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.854 15:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.854 15:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.854 15:40:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:21.423 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:21.682 15:40:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:21.682 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.682 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.941 15:40:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:22.509 request: 00:22:22.509 { 00:22:22.509 "name": "nvme0", 00:22:22.509 "dhchap_key": "key1", 00:22:22.509 "dhchap_ctrlr_key": "key3", 00:22:22.509 "method": "bdev_nvme_set_keys", 00:22:22.509 "req_id": 1 00:22:22.509 } 00:22:22.509 Got JSON-RPC error response 00:22:22.509 response: 00:22:22.509 { 00:22:22.509 "code": -13, 00:22:22.509 "message": "Permission denied" 00:22:22.509 } 00:22:22.509 15:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:22.509 15:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:22.509 15:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:22.509 15:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:22.509 15:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:22:22.509 15:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.509 15:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:22.509 15:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:22.509 15:41:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:23.887 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:23.887 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:23.888 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.888 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:23.888 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:23.888 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.888 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.888 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.888 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.888 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.888 15:41:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:24.456 nvme0n1 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:24.456 
15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:24.456 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:25.025 request: 00:22:25.025 { 00:22:25.025 "name": "nvme0", 00:22:25.025 "dhchap_key": "key2", 00:22:25.025 "dhchap_ctrlr_key": "key0", 00:22:25.025 "method": "bdev_nvme_set_keys", 00:22:25.025 "req_id": 1 00:22:25.025 } 00:22:25.025 Got JSON-RPC error response 00:22:25.025 response: 00:22:25.025 { 00:22:25.025 "code": -13, 00:22:25.025 "message": "Permission denied" 00:22:25.025 } 00:22:25.025 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:25.025 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:25.025 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:25.025 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:25.025 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:25.025 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.025 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:25.284 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:25.284 15:41:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:26.234 15:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:26.234 15:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:26.234 15:41:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:26.546 15:41:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2298953 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2298953 ']' 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2298953 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2298953 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2298953' 00:22:26.546 killing process with pid 2298953 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2298953 00:22:26.546 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2298953 00:22:26.844 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:26.844 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:26.845 rmmod nvme_rdma 00:22:26.845 rmmod nvme_fabrics 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2323373 ']' 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2323373 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2323373 ']' 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2323373 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2323373 00:22:26.845 15:41:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2323373' 00:22:26.845 killing process with pid 2323373 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2323373 00:22:26.845 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2323373 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yjG /tmp/spdk.key-sha256.dMD /tmp/spdk.key-sha384.1SL /tmp/spdk.key-sha512.gsg /tmp/spdk.key-sha512.CyA /tmp/spdk.key-sha384.NN6 /tmp/spdk.key-sha256.WdO '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:22:27.104 00:22:27.104 real 2m41.242s 00:22:27.104 user 6m10.080s 00:22:27.104 sys 0m23.873s 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.104 ************************************ 00:22:27.104 END TEST nvmf_auth_target 00:22:27.104 ************************************ 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.104 ************************************ 00:22:27.104 START TEST nvmf_fuzz 00:22:27.104 ************************************ 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:27.104 * Looking for test storage... 
00:22:27.104 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:22:27.104 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:27.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.364 --rc genhtml_branch_coverage=1 00:22:27.364 --rc genhtml_function_coverage=1 00:22:27.364 --rc genhtml_legend=1 00:22:27.364 --rc geninfo_all_blocks=1 00:22:27.364 --rc geninfo_unexecuted_blocks=1 00:22:27.364 00:22:27.364 ' 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:27.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.364 --rc genhtml_branch_coverage=1 00:22:27.364 --rc genhtml_function_coverage=1 00:22:27.364 --rc genhtml_legend=1 00:22:27.364 --rc geninfo_all_blocks=1 00:22:27.364 --rc geninfo_unexecuted_blocks=1 00:22:27.364 00:22:27.364 ' 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:27.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.364 --rc genhtml_branch_coverage=1 00:22:27.364 --rc genhtml_function_coverage=1 00:22:27.364 --rc genhtml_legend=1 00:22:27.364 --rc geninfo_all_blocks=1 00:22:27.364 --rc geninfo_unexecuted_blocks=1 00:22:27.364 00:22:27.364 ' 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:27.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.364 --rc genhtml_branch_coverage=1 00:22:27.364 --rc genhtml_function_coverage=1 00:22:27.364 --rc genhtml_legend=1 00:22:27.364 --rc geninfo_all_blocks=1 00:22:27.364 --rc geninfo_unexecuted_blocks=1 00:22:27.364 00:22:27.364 ' 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.364 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.365 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.365 15:41:04 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.365 15:41:05 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.937 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:33.938 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:33.938 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
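The scan above is nvmf/common.sh matching each Mellanox device ID (0x1015 here, a ConnectX-4 Lx part) against its known-NIC table and then resolving each matched PCI function to a kernel net device. As a minimal stand-alone sketch of that same mapping, assuming lspci is available on the box (the 0x15b3 vendor ID and the sysfs glob are taken from the log; the lspci invocation itself is not):

```bash
#!/usr/bin/env bash
# Sketch only: enumerate Mellanox (vendor 0x15b3) PCI functions and map each
# one to its netdev via the same sysfs path common.sh expands below.
for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        # e.g. 0000:d9:00.0 -> mlx_0_0, matching the "Found net devices" lines
        [[ -e $netdir ]] && echo "$pci -> ${netdir##*/}"
    done
done
```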
00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:33.938 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:33.938 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:33.938 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:33.938 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:33.938 altname enp217s0f0np0 00:22:33.938 altname ens818f0np0 00:22:33.938 inet 192.168.100.8/24 scope global mlx_0_0 
00:22:33.938 valid_lft forever preferred_lft forever 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:33.938 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:33.938 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:33.938 altname enp217s0f1np1 00:22:33.938 altname ens818f1np1 00:22:33.938 inet 192.168.100.9/24 scope global mlx_0_1 00:22:33.938 valid_lft forever preferred_lft forever 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:33.938 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:34.198 15:41:11 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:34.198 192.168.100.9' 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:34.198 192.168.100.9' 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:22:34.198 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:34.199 192.168.100.9' 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2330136 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 
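What follows is the fuzz-target bring-up: waitforlisten blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock, then the script creates an RDMA transport, a 64 MB malloc bdev, a subsystem, a namespace, and a listener over JSON-RPC before pointing nvme_fuzz at that listener. A condensed sketch of the sequence, with every argument copied from the log entries that follow and only the long binary paths shortened into variables for readability:

```bash
#!/usr/bin/env bash
# Condensed re-statement of the fabrics_fuzz.sh steps traced below.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock
FUZZ=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create -b Malloc0 64 512          # 64 MB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# 30-second randomized fuzz pass against the listener created above:
$FUZZ -m 0x2 -t 30 -S 123456 -N -a \
    -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
```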
00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2330136 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' -z 2330136 ']' 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:34.199 15:41:11 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.458 Malloc0 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 
192.168.100.8 -s 4420 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:22:34.458 15:41:12 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:23:06.542 Fuzzing completed. Shutting down the fuzz application 00:23:06.542 00:23:06.542 Dumping successful admin opcodes: 00:23:06.542 8, 9, 10, 24, 00:23:06.542 Dumping successful io opcodes: 00:23:06.542 0, 9, 00:23:06.542 NS: 0x2000008f1f00 I/O qp, Total commands completed: 975479, total successful commands: 5711, random_seed: 738527744 00:23:06.542 NS: 0x2000008f1f00 admin qp, Total commands completed: 132656, total successful commands: 1076, random_seed: 2231905280 00:23:06.542 15:41:42 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:06.542 Fuzzing completed. Shutting down the fuzz application 00:23:06.542 00:23:06.542 Dumping successful admin opcodes: 00:23:06.542 24, 00:23:06.542 Dumping successful io opcodes: 00:23:06.542 00:23:06.542 NS: 0x2000008f1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1293259532 00:23:06.542 NS: 0x2000008f1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1293321622 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-rdma 00:23:06.543 rmmod nvme_rdma 00:23:06.543 rmmod nvme_fabrics 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 2330136 ']' 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 2330136 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 2330136 ']' 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 2330136 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2330136 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2330136' 00:23:06.543 killing process with pid 2330136 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 2330136 00:23:06.543 15:41:43 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 2330136 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:06.543 00:23:06.543 real 0m39.362s 00:23:06.543 user 0m49.043s 00:23:06.543 sys 0m21.319s 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:06.543 ************************************ 00:23:06.543 END TEST nvmf_fuzz 00:23:06.543 ************************************ 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:06.543 ************************************ 00:23:06.543 START TEST nvmf_multiconnection 00:23:06.543 ************************************ 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:23:06.543 * Looking for test storage... 00:23:06.543 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:23:06.543 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:06.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.803 --rc genhtml_branch_coverage=1 00:23:06.803 --rc genhtml_function_coverage=1 00:23:06.803 --rc genhtml_legend=1 00:23:06.803 --rc geninfo_all_blocks=1 00:23:06.803 --rc geninfo_unexecuted_blocks=1 00:23:06.803 00:23:06.803 ' 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:06.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.803 --rc genhtml_branch_coverage=1 00:23:06.803 --rc genhtml_function_coverage=1 00:23:06.803 --rc genhtml_legend=1 00:23:06.803 --rc geninfo_all_blocks=1 00:23:06.803 --rc geninfo_unexecuted_blocks=1 00:23:06.803 00:23:06.803 ' 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:06.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.803 --rc genhtml_branch_coverage=1 00:23:06.803 --rc genhtml_function_coverage=1 00:23:06.803 --rc genhtml_legend=1 00:23:06.803 --rc geninfo_all_blocks=1 00:23:06.803 --rc geninfo_unexecuted_blocks=1 00:23:06.803 00:23:06.803 ' 00:23:06.803 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:06.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.803 --rc genhtml_branch_coverage=1 00:23:06.803 --rc genhtml_function_coverage=1 00:23:06.804 --rc genhtml_legend=1 00:23:06.804 --rc geninfo_all_blocks=1 00:23:06.804 --rc geninfo_unexecuted_blocks=1 00:23:06.804 00:23:06.804 ' 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.804 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.804 15:41:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.373 
15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:13.373 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:13.373 15:41:50 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:13.373 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:13.373 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.373 15:41:50 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:13.373 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:13.373 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:13.374 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:13.374 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:13.374 altname enp217s0f0np0 00:23:13.374 altname ens818f0np0 00:23:13.374 inet 192.168.100.8/24 scope global mlx_0_0 00:23:13.374 valid_lft forever preferred_lft forever 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:13.374 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:13.374 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:13.374 altname enp217s0f1np1 00:23:13.374 altname ens818f1np1 00:23:13.374 inet 192.168.100.9/24 scope global mlx_0_1 00:23:13.374 valid_lft forever preferred_lft forever 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:13.374 192.168.100.9' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:13.374 192.168.100.9' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:13.374 192.168.100.9' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=2338851 00:23:13.374 15:41:50 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 2338851 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 2338851 ']' 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.374 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:13.375 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 [2024-11-03 15:41:50.758663] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:23:13.375 [2024-11-03 15:41:50.758712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.375 [2024-11-03 15:41:50.834345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:13.375 [2024-11-03 15:41:50.857468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.375 [2024-11-03 15:41:50.857507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.375 [2024-11-03 15:41:50.857516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.375 [2024-11-03 15:41:50.857524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.375 [2024-11-03 15:41:50.857530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
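[What the nvmfappstart/waitforlisten trace above amounts to, as a minimal standalone sketch — not the test harness's exact implementation; polling via scripts/rpc.py and the spdk_get_version RPC is an illustrative equivalent of waitforlisten, and SPDK_DIR is an assumed variable for the checkout path shown in the trace:]
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &   # same flags as traced: shm id 0, trace mask 0xFFFF, cores 0-3
nvmfpid=$!
# Poll the RPC socket until the target answers, rather than sleeping blindly.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done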
00:23:13.375 [2024-11-03 15:41:50.859302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.375 [2024-11-03 15:41:50.859397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.375 [2024-11-03 15:41:50.859463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.375 [2024-11-03 15:41:50.859465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.375 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:13.375 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:23:13.375 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:13.375 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:13.375 15:41:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.375 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:13.375 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.375 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 [2024-11-03 15:41:51.031641] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8b5c50/0x8ba100) succeed. 00:23:13.375 [2024-11-03 15:41:51.040660] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8b7290/0x8fb7a0) succeed. 
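[The transport is now up on both mlx5 devices; the traced block that follows repeats the same four RPCs for cnode1 through cnode11. Condensed, the sequence is equivalent to this sketch — assuming rpc.py against the default /var/tmp/spdk.sock, where the test itself goes through its rpc_cmd wrapper; all RPC names and arguments are taken from the trace:]
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # one RDMA transport shared by all subsystems
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b Malloc$i                              # 64 MiB ramdisk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i     # -a: allow any host; -s: serial used by waitforserial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done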
00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 Malloc1 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 [2024-11-03 15:41:51.221538] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 Malloc2 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 Malloc3 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 
15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 Malloc4 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 Malloc5 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.635 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.636 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.636 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:13.636 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.636 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 Malloc6 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 Malloc7 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 Malloc8 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 Malloc9 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.896 Malloc10 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.896 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.897 Malloc11 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.897 15:41:51 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.897 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:14.155 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.155 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:14.155 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:14.155 15:41:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:15.092 15:41:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:15.092 15:41:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:15.092 15:41:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:15.092 15:41:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:15.092 15:41:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:16.997 15:41:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:16.997 15:41:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:16.997 15:41:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:23:16.997 15:41:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:16.997 15:41:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:16.997 15:41:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:16.997 15:41:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.997 15:41:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:23:17.933 15:41:55 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:17.933 15:41:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:17.933 15:41:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:17.933 15:41:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:17.933 15:41:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:20.467 15:41:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:20.467 15:41:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:20.467 15:41:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:23:20.467 15:41:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:20.467 15:41:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:20.467 15:41:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:20.467 15:41:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.467 15:41:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:23:21.035 15:41:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:21.035 15:41:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:21.035 15:41:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:21.035 15:41:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:21.035 15:41:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:22.947 15:42:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:22.947 15:42:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:22.947 15:42:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:23:22.947 15:42:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:22.947 15:42:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:22.947 15:42:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:22.947 15:42:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:22.947 15:42:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:23:24.324 15:42:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:24.324 15:42:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:24.324 15:42:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:24.324 15:42:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:24.324 15:42:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:26.228 15:42:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:26.228 15:42:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:26.228 15:42:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:23:26.228 15:42:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:26.228 15:42:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:26.228 15:42:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:26.228 15:42:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:26.228 15:42:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:23:27.165 15:42:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:27.165 15:42:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:27.165 15:42:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:27.165 15:42:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:27.165 15:42:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:29.070 15:42:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:29.070 15:42:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:23:29.070 15:42:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:29.070 15:42:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:29.070 15:42:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:29.070 15:42:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:29.070 15:42:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:29.070 15:42:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:23:30.007 15:42:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:30.007 15:42:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:30.007 15:42:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:30.007 15:42:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:30.007 15:42:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:32.541 15:42:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:32.541 15:42:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:32.541 15:42:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:23:32.541 15:42:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:32.541 15:42:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:32.541 15:42:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:32.541 15:42:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:32.541 15:42:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:23:33.109 15:42:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:33.109 15:42:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:33.109 15:42:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:33.109 15:42:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:33.109 15:42:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:35.107 15:42:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:35.107 15:42:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:35.107 15:42:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:23:35.107 15:42:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:35.107 15:42:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == 
nvme_device_counter )) 00:23:35.107 15:42:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:35.107 15:42:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:35.107 15:42:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:23:36.051 15:42:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:36.051 15:42:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:36.051 15:42:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:36.051 15:42:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:36.051 15:42:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:38.585 15:42:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:38.585 15:42:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:38.585 15:42:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:23:38.585 15:42:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:38.585 15:42:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:38.585 15:42:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:38.585 15:42:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:38.585 15:42:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:23:39.153 15:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:39.153 15:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:39.153 15:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:39.153 15:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:39.153 15:42:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:41.058 15:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:41.058 15:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:41.058 15:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:23:41.058 15:42:18 
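For reference: the connect/verify cycle repeating here is multiconnection.sh lines 28-30, one nvme connect per subsystem followed by waitforserial, which polls lsblk until a block device with the expected serial shows up. A condensed host-side sketch, assuming nvme-cli is installed and reusing the host NQN/ID captured in the trace; the until loop stands in for waitforserial's bounded retry (the real helper gives up after 15 attempts):

HOSTID=8013ee90-59d8-e711-906e-00163566263e
for i in $(seq 1 11); do
  nvme connect -i 15 --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" \
    --hostid="$HOSTID" -t rdma -n "nqn.2016-06.io.spdk:cnode$i" \
    -a 192.168.100.8 -s 4420                        # -i 15: request 15 I/O queues
  until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do
    sleep 2                                         # wait for a device with serial SPDK$i to appear
  done
done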
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:41.058 15:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:41.058 15:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:41.058 15:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.058 15:42:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:23:41.996 15:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:41.996 15:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:41.996 15:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:41.996 15:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:41.996 15:42:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:44.529 15:42:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:44.529 15:42:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:44.529 15:42:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:23:44.529 15:42:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:44.529 15:42:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:44.529 15:42:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:44.529 15:42:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:44.529 15:42:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:23:45.098 15:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:45.098 15:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:23:45.098 15:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.098 15:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:23:45.098 15:42:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:23:47.635 15:42:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:23:47.635 15:42:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:23:47.635 15:42:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:23:47.635 15:42:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:23:47.635 15:42:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:23:47.635 15:42:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:23:47.635 15:42:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:47.635 [global] 00:23:47.635 thread=1 00:23:47.635 invalidate=1 00:23:47.635 rw=read 00:23:47.635 time_based=1 00:23:47.635 runtime=10 00:23:47.635 ioengine=libaio 00:23:47.635 direct=1 00:23:47.635 bs=262144 00:23:47.635 iodepth=64 00:23:47.635 norandommap=1 00:23:47.635 numjobs=1 00:23:47.635 00:23:47.635 [job0] 00:23:47.635 filename=/dev/nvme0n1 00:23:47.635 [job1] 00:23:47.635 filename=/dev/nvme10n1 00:23:47.635 [job2] 00:23:47.635 filename=/dev/nvme1n1 00:23:47.635 [job3] 00:23:47.635 filename=/dev/nvme2n1 00:23:47.635 [job4] 00:23:47.635 filename=/dev/nvme3n1 00:23:47.635 [job5] 00:23:47.635 filename=/dev/nvme4n1 00:23:47.635 [job6] 00:23:47.635 filename=/dev/nvme5n1 00:23:47.635 [job7] 00:23:47.635 filename=/dev/nvme6n1 00:23:47.635 [job8] 00:23:47.635 filename=/dev/nvme7n1 00:23:47.635 [job9] 00:23:47.635 filename=/dev/nvme8n1 00:23:47.635 [job10] 00:23:47.635 filename=/dev/nvme9n1 00:23:47.635 Could not set queue depth (nvme0n1) 00:23:47.635 Could not set queue depth (nvme10n1) 00:23:47.635 Could not set queue depth (nvme1n1) 00:23:47.635 Could not set queue depth (nvme2n1) 00:23:47.635 Could not set queue depth (nvme3n1) 00:23:47.635 Could not set queue depth (nvme4n1) 00:23:47.635 Could not set queue depth (nvme5n1) 00:23:47.635 Could not set queue depth (nvme6n1) 00:23:47.635 Could not set queue depth (nvme7n1) 00:23:47.635 Could not set queue depth (nvme8n1) 00:23:47.635 Could not set queue depth (nvme9n1) 00:23:47.635 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.635 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.635 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.635 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.635 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.635 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.635 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.635 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.635 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.636 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.636 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.636 fio-3.35 00:23:47.636 Starting 11 threads 00:23:59.852 00:23:59.852 job0: (groupid=0, jobs=1): err= 0: pid=2345625: Sun Nov 3 15:42:35 2024 00:23:59.852 read: IOPS=1439, BW=360MiB/s (377MB/s)(3610MiB/10028msec) 00:23:59.852 slat (usec): min=10, max=26319, avg=678.72, stdev=2052.13 00:23:59.852 clat (msec): min=10, max=104, avg=43.73, stdev=16.73 00:23:59.852 lat (msec): min=10, max=104, avg=44.41, stdev=17.08 00:23:59.852 clat percentiles (msec): 00:23:59.852 | 1.00th=[ 23], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:23:59.852 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 39], 00:23:59.852 | 70.00th=[ 63], 80.00th=[ 64], 90.00th=[ 66], 95.00th=[ 68], 00:23:59.852 | 99.00th=[ 81], 99.50th=[ 83], 99.90th=[ 89], 99.95th=[ 99], 00:23:59.852 | 99.99th=[ 105] 00:23:59.852 bw ( KiB/s): min=218624, max=527872, per=9.03%, avg=368025.60, stdev=133299.45, samples=20 00:23:59.852 iops : min= 854, max= 2062, avg=1437.60, stdev=520.70, samples=20 00:23:59.852 lat (msec) : 20=0.56%, 50=64.10%, 100=35.29%, 250=0.04% 00:23:59.852 cpu : usr=0.42%, sys=5.14%, ctx=2906, majf=0, minf=3659 00:23:59.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:59.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.852 issued rwts: total=14439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.852 job1: (groupid=0, jobs=1): err= 0: pid=2345634: Sun Nov 3 15:42:35 2024 00:23:59.852 read: IOPS=1170, BW=293MiB/s (307MB/s)(2935MiB/10029msec) 00:23:59.852 slat (usec): min=12, max=17421, avg=848.18, stdev=2092.45 00:23:59.852 clat (usec): min=13258, max=94837, avg=53775.94, stdev=11858.61 00:23:59.852 lat (usec): min=13521, max=96845, avg=54624.12, stdev=12162.62 00:23:59.852 clat percentiles (usec): 00:23:59.852 | 1.00th=[30278], 5.00th=[32113], 10.00th=[34341], 20.00th=[46400], 00:23:59.852 | 30.00th=[47449], 40.00th=[48497], 50.00th=[50070], 60.00th=[62129], 00:23:59.852 | 70.00th=[63177], 80.00th=[64226], 90.00th=[66323], 95.00th=[68682], 00:23:59.852 | 99.00th=[80217], 99.50th=[81265], 99.90th=[85459], 99.95th=[88605], 00:23:59.852 | 99.99th=[92799] 00:23:59.852 bw ( KiB/s): min=217600, max=485376, per=7.34%, avg=298880.00, stdev=64254.39, samples=20 00:23:59.852 iops : min= 850, max= 1896, avg=1167.50, stdev=250.99, samples=20 00:23:59.852 lat (msec) : 20=0.30%, 50=49.05%, 100=50.66% 00:23:59.852 cpu : usr=0.54%, sys=5.18%, ctx=2177, majf=0, minf=4097 00:23:59.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:59.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.852 issued rwts: total=11738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.852 job2: (groupid=0, jobs=1): err= 0: pid=2345636: Sun Nov 3 15:42:35 2024 00:23:59.852 read: IOPS=830, BW=208MiB/s (218MB/s)(2086MiB/10050msec) 00:23:59.852 slat (usec): min=12, max=23685, avg=1180.37, stdev=2908.15 00:23:59.852 clat (msec): min=12, max=112, avg=75.84, stdev=10.12 00:23:59.852 lat (msec): min=12, max=112, avg=77.02, stdev=10.57 00:23:59.852 clat percentiles (msec): 00:23:59.852 | 1.00th=[ 47], 5.00th=[ 64], 10.00th=[ 64], 20.00th=[ 65], 00:23:59.852 | 30.00th=[ 68], 
40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:23:59.852 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 87], 00:23:59.852 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 108], 99.95th=[ 111], 00:23:59.852 | 99.99th=[ 113] 00:23:59.852 bw ( KiB/s): min=194048, max=250368, per=5.20%, avg=211942.40, stdev=21509.44, samples=20 00:23:59.853 iops : min= 758, max= 978, avg=827.90, stdev=84.02, samples=20 00:23:59.853 lat (msec) : 20=0.54%, 50=0.60%, 100=98.51%, 250=0.35% 00:23:59.853 cpu : usr=0.34%, sys=4.09%, ctx=1659, majf=0, minf=4097 00:23:59.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:59.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.853 issued rwts: total=8342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.853 job3: (groupid=0, jobs=1): err= 0: pid=2345637: Sun Nov 3 15:42:35 2024 00:23:59.853 read: IOPS=3352, BW=838MiB/s (879MB/s)(8405MiB/10029msec) 00:23:59.853 slat (usec): min=12, max=8454, avg=295.75, stdev=696.75 00:23:59.853 clat (usec): min=1941, max=56816, avg=18775.65, stdev=6452.94 00:23:59.853 lat (usec): min=1986, max=56853, avg=19071.40, stdev=6561.15 00:23:59.853 clat percentiles (usec): 00:23:59.853 | 1.00th=[13829], 5.00th=[14353], 10.00th=[14615], 20.00th=[14877], 00:23:59.853 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:23:59.853 | 70.00th=[16450], 80.00th=[29230], 90.00th=[30802], 95.00th=[31327], 00:23:59.853 | 99.00th=[33817], 99.50th=[35390], 99.90th=[40633], 99.95th=[48497], 00:23:59.853 | 99.99th=[54264] 00:23:59.853 bw ( KiB/s): min=513536, max=1056256, per=21.08%, avg=859008.00, stdev=252805.06, samples=20 00:23:59.853 iops : min= 2006, max= 4126, avg=3355.50, stdev=987.52, samples=20 00:23:59.853 lat (msec) : 2=0.01%, 4=0.04%, 10=0.15%, 20=78.13%, 50=21.64% 00:23:59.853 lat (msec) : 100=0.04% 00:23:59.853 cpu : usr=0.53%, sys=8.51%, ctx=6619, majf=0, minf=4097 00:23:59.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:59.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.853 issued rwts: total=33618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.853 job4: (groupid=0, jobs=1): err= 0: pid=2345638: Sun Nov 3 15:42:35 2024 00:23:59.853 read: IOPS=861, BW=215MiB/s (226MB/s)(2166MiB/10051msec) 00:23:59.853 slat (usec): min=13, max=45958, avg=1131.32, stdev=4514.52 00:23:59.853 clat (msec): min=13, max=125, avg=73.04, stdev=15.42 00:23:59.853 lat (msec): min=13, max=128, avg=74.17, stdev=16.21 00:23:59.853 clat percentiles (msec): 00:23:59.853 | 1.00th=[ 24], 5.00th=[ 42], 10.00th=[ 49], 20.00th=[ 64], 00:23:59.853 | 30.00th=[ 66], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:23:59.853 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 86], 00:23:59.853 | 99.00th=[ 105], 99.50th=[ 118], 99.90th=[ 124], 99.95th=[ 126], 00:23:59.853 | 99.99th=[ 126] 00:23:59.853 bw ( KiB/s): min=188416, max=328336, per=5.40%, avg=220167.20, stdev=37258.75, samples=20 00:23:59.853 iops : min= 736, max= 1282, avg=860.00, stdev=145.46, samples=20 00:23:59.853 lat (msec) : 20=0.69%, 50=11.06%, 100=87.14%, 250=1.11% 00:23:59.853 cpu : usr=0.26%, sys=3.97%, ctx=1761, majf=0, minf=4097 00:23:59.853 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:59.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.853 issued rwts: total=8662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.853 job5: (groupid=0, jobs=1): err= 0: pid=2345639: Sun Nov 3 15:42:35 2024 00:23:59.853 read: IOPS=1229, BW=307MiB/s (322MB/s)(3088MiB/10041msec) 00:23:59.853 slat (usec): min=12, max=26293, avg=796.76, stdev=2319.63 00:23:59.853 clat (msec): min=12, max=103, avg=51.18, stdev=14.17 00:23:59.853 lat (msec): min=13, max=108, avg=51.98, stdev=14.54 00:23:59.853 clat percentiles (usec): 00:23:59.853 | 1.00th=[23200], 5.00th=[30278], 10.00th=[31327], 20.00th=[32637], 00:23:59.853 | 30.00th=[46400], 40.00th=[47973], 50.00th=[49546], 60.00th=[61604], 00:23:59.853 | 70.00th=[63177], 80.00th=[64226], 90.00th=[66323], 95.00th=[68682], 00:23:59.853 | 99.00th=[81265], 99.50th=[82314], 99.90th=[86508], 99.95th=[87557], 00:23:59.853 | 99.99th=[99091] 00:23:59.853 bw ( KiB/s): min=220160, max=512000, per=7.72%, avg=314547.20, stdev=89028.24, samples=20 00:23:59.853 iops : min= 860, max= 2000, avg=1228.70, stdev=347.77, samples=20 00:23:59.853 lat (msec) : 20=0.62%, 50=52.36%, 100=47.02%, 250=0.01% 00:23:59.853 cpu : usr=0.47%, sys=5.27%, ctx=2456, majf=0, minf=4097 00:23:59.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:59.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.853 issued rwts: total=12350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.853 job6: (groupid=0, jobs=1): err= 0: pid=2345640: Sun Nov 3 15:42:35 2024 00:23:59.853 read: IOPS=826, BW=207MiB/s (217MB/s)(2077MiB/10048msec) 00:23:59.853 slat (usec): min=12, max=29777, avg=1200.04, stdev=3350.00 00:23:59.853 clat (msec): min=14, max=109, avg=76.12, stdev= 9.68 00:23:59.853 lat (msec): min=14, max=114, avg=77.32, stdev=10.25 00:23:59.853 clat percentiles (msec): 00:23:59.853 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 64], 20.00th=[ 65], 00:23:59.853 | 30.00th=[ 68], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:23:59.853 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 87], 00:23:59.853 | 99.00th=[ 96], 99.50th=[ 102], 99.90th=[ 108], 99.95th=[ 109], 00:23:59.853 | 99.99th=[ 110] 00:23:59.853 bw ( KiB/s): min=190464, max=252416, per=5.18%, avg=211070.80, stdev=21840.04, samples=20 00:23:59.853 iops : min= 744, max= 986, avg=824.45, stdev=85.24, samples=20 00:23:59.853 lat (msec) : 20=0.26%, 50=0.40%, 100=98.58%, 250=0.76% 00:23:59.853 cpu : usr=0.32%, sys=4.06%, ctx=1544, majf=0, minf=4097 00:23:59.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:59.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.853 issued rwts: total=8307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.853 job7: (groupid=0, jobs=1): err= 0: pid=2345641: Sun Nov 3 15:42:35 2024 00:23:59.853 read: IOPS=828, BW=207MiB/s (217MB/s)(2082MiB/10049msec) 00:23:59.853 slat (usec): min=12, max=18802, avg=1196.97, stdev=2923.18 00:23:59.853 
clat (msec): min=13, max=111, avg=75.96, stdev= 9.54 00:23:59.853 lat (msec): min=14, max=111, avg=77.16, stdev=10.00 00:23:59.853 clat percentiles (msec): 00:23:59.853 | 1.00th=[ 61], 5.00th=[ 64], 10.00th=[ 64], 20.00th=[ 65], 00:23:59.853 | 30.00th=[ 68], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:23:59.853 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 87], 00:23:59.853 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 106], 99.95th=[ 111], 00:23:59.853 | 99.99th=[ 112] 00:23:59.853 bw ( KiB/s): min=193024, max=249344, per=5.19%, avg=211557.15, stdev=21349.16, samples=20 00:23:59.853 iops : min= 754, max= 974, avg=826.35, stdev=83.32, samples=20 00:23:59.853 lat (msec) : 20=0.29%, 50=0.37%, 100=99.21%, 250=0.13% 00:23:59.853 cpu : usr=0.37%, sys=3.95%, ctx=1540, majf=0, minf=4097 00:23:59.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:59.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.853 issued rwts: total=8326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.853 job8: (groupid=0, jobs=1): err= 0: pid=2345650: Sun Nov 3 15:42:35 2024 00:23:59.853 read: IOPS=1168, BW=292MiB/s (306MB/s)(2930MiB/10028msec) 00:23:59.853 slat (usec): min=12, max=18712, avg=848.91, stdev=2180.77 00:23:59.853 clat (usec): min=12854, max=95314, avg=53856.51, stdev=11798.86 00:23:59.853 lat (usec): min=13121, max=99903, avg=54705.43, stdev=12115.38 00:23:59.853 clat percentiles (usec): 00:23:59.853 | 1.00th=[30278], 5.00th=[32375], 10.00th=[34866], 20.00th=[46924], 00:23:59.853 | 30.00th=[47449], 40.00th=[48497], 50.00th=[50070], 60.00th=[62129], 00:23:59.853 | 70.00th=[63177], 80.00th=[64226], 90.00th=[66323], 95.00th=[68682], 00:23:59.853 | 99.00th=[80217], 99.50th=[81265], 99.90th=[87557], 99.95th=[89654], 00:23:59.853 | 99.99th=[93848] 00:23:59.853 bw ( KiB/s): min=211968, max=478208, per=7.33%, avg=298452.40, stdev=63505.51, samples=20 00:23:59.853 iops : min= 828, max= 1868, avg=1165.80, stdev=248.05, samples=20 00:23:59.853 lat (msec) : 20=0.29%, 50=48.94%, 100=50.77% 00:23:59.853 cpu : usr=0.57%, sys=5.34%, ctx=2202, majf=0, minf=4097 00:23:59.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:59.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.853 issued rwts: total=11720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.853 job9: (groupid=0, jobs=1): err= 0: pid=2345651: Sun Nov 3 15:42:35 2024 00:23:59.853 read: IOPS=898, BW=225MiB/s (236MB/s)(2255MiB/10040msec) 00:23:59.853 slat (usec): min=13, max=22820, avg=1091.86, stdev=2833.80 00:23:59.853 clat (msec): min=13, max=105, avg=70.07, stdev=15.91 00:23:59.853 lat (msec): min=13, max=105, avg=71.17, stdev=16.36 00:23:59.853 clat percentiles (msec): 00:23:59.853 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 47], 20.00th=[ 49], 00:23:59.853 | 30.00th=[ 64], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 81], 00:23:59.853 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 86], 00:23:59.853 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 103], 99.95th=[ 104], 00:23:59.853 | 99.99th=[ 106] 00:23:59.853 bw ( KiB/s): min=193536, max=338944, per=5.63%, avg=229299.20, stdev=55294.32, samples=20 00:23:59.853 iops : min= 
756, max= 1324, avg=895.70, stdev=215.99, samples=20 00:23:59.853 lat (msec) : 20=0.30%, 50=25.07%, 100=74.47%, 250=0.17% 00:23:59.853 cpu : usr=0.30%, sys=4.49%, ctx=1832, majf=0, minf=4097 00:23:59.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:59.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.853 issued rwts: total=9020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.853 job10: (groupid=0, jobs=1): err= 0: pid=2345652: Sun Nov 3 15:42:35 2024 00:23:59.853 read: IOPS=3331, BW=833MiB/s (873MB/s)(8362MiB/10039msec) 00:23:59.853 slat (usec): min=11, max=11614, avg=297.17, stdev=753.53 00:23:59.853 clat (usec): min=12254, max=84914, avg=18893.95, stdev=8794.68 00:23:59.853 lat (usec): min=12295, max=84963, avg=19191.12, stdev=8934.36 00:23:59.853 clat percentiles (usec): 00:23:59.854 | 1.00th=[13042], 5.00th=[13435], 10.00th=[13829], 20.00th=[14615], 00:23:59.854 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:23:59.854 | 70.00th=[15664], 80.00th=[17433], 90.00th=[32113], 95.00th=[35390], 00:23:59.854 | 99.00th=[49546], 99.50th=[51119], 99.90th=[64226], 99.95th=[74974], 00:23:59.854 | 99.99th=[84411] 00:23:59.854 bw ( KiB/s): min=323072, max=1085952, per=20.98%, avg=854637.85, stdev=288132.22, samples=20 00:23:59.854 iops : min= 1262, max= 4242, avg=3338.40, stdev=1125.57, samples=20 00:23:59.854 lat (msec) : 20=80.72%, 50=18.45%, 100=0.83% 00:23:59.854 cpu : usr=0.55%, sys=8.25%, ctx=5961, majf=0, minf=4097 00:23:59.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:59.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:59.854 issued rwts: total=33446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:59.854 00:23:59.854 Run status group 0 (all jobs): 00:23:59.854 READ: bw=3979MiB/s (4172MB/s), 207MiB/s-838MiB/s (217MB/s-879MB/s), io=39.1GiB (41.9GB), run=10028-10051msec 00:23:59.854 00:23:59.854 Disk stats (read/write): 00:23:59.854 nvme0n1: ios=28342/0, merge=0/0, ticks=1220786/0, in_queue=1220786, util=96.88% 00:23:59.854 nvme10n1: ios=22946/0, merge=0/0, ticks=1222297/0, in_queue=1222297, util=97.10% 00:23:59.854 nvme1n1: ios=16360/0, merge=0/0, ticks=1222531/0, in_queue=1222531, util=97.45% 00:23:59.854 nvme2n1: ios=66685/0, merge=0/0, ticks=1214032/0, in_queue=1214032, util=97.63% 00:23:59.854 nvme3n1: ios=16981/0, merge=0/0, ticks=1222457/0, in_queue=1222457, util=97.71% 00:23:59.854 nvme4n1: ios=24313/0, merge=0/0, ticks=1222920/0, in_queue=1222920, util=98.11% 00:23:59.854 nvme5n1: ios=16316/0, merge=0/0, ticks=1222516/0, in_queue=1222516, util=98.27% 00:23:59.854 nvme6n1: ios=16332/0, merge=0/0, ticks=1222762/0, in_queue=1222762, util=98.40% 00:23:59.854 nvme7n1: ios=22936/0, merge=0/0, ticks=1223260/0, in_queue=1223260, util=98.90% 00:23:59.854 nvme8n1: ios=17616/0, merge=0/0, ticks=1224136/0, in_queue=1224136, util=99.12% 00:23:59.854 nvme9n1: ios=66512/0, merge=0/0, ticks=1217194/0, in_queue=1217194, util=99.29% 00:23:59.854 15:42:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite 
-r 10 00:23:59.854 [global] 00:23:59.854 thread=1 00:23:59.854 invalidate=1 00:23:59.854 rw=randwrite 00:23:59.854 time_based=1 00:23:59.854 runtime=10 00:23:59.854 ioengine=libaio 00:23:59.854 direct=1 00:23:59.854 bs=262144 00:23:59.854 iodepth=64 00:23:59.854 norandommap=1 00:23:59.854 numjobs=1 00:23:59.854 00:23:59.854 [job0] 00:23:59.854 filename=/dev/nvme0n1 00:23:59.854 [job1] 00:23:59.854 filename=/dev/nvme10n1 00:23:59.854 [job2] 00:23:59.854 filename=/dev/nvme1n1 00:23:59.854 [job3] 00:23:59.854 filename=/dev/nvme2n1 00:23:59.854 [job4] 00:23:59.854 filename=/dev/nvme3n1 00:23:59.854 [job5] 00:23:59.854 filename=/dev/nvme4n1 00:23:59.854 [job6] 00:23:59.854 filename=/dev/nvme5n1 00:23:59.854 [job7] 00:23:59.854 filename=/dev/nvme6n1 00:23:59.854 [job8] 00:23:59.854 filename=/dev/nvme7n1 00:23:59.854 [job9] 00:23:59.854 filename=/dev/nvme8n1 00:23:59.854 [job10] 00:23:59.854 filename=/dev/nvme9n1 00:23:59.854 Could not set queue depth (nvme0n1) 00:23:59.854 Could not set queue depth (nvme10n1) 00:23:59.854 Could not set queue depth (nvme1n1) 00:23:59.854 Could not set queue depth (nvme2n1) 00:23:59.854 Could not set queue depth (nvme3n1) 00:23:59.854 Could not set queue depth (nvme4n1) 00:23:59.854 Could not set queue depth (nvme5n1) 00:23:59.854 Could not set queue depth (nvme6n1) 00:23:59.854 Could not set queue depth (nvme7n1) 00:23:59.854 Could not set queue depth (nvme8n1) 00:23:59.854 Could not set queue depth (nvme9n1) 00:23:59.854 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:59.854 fio-3.35 00:23:59.854 Starting 11 threads 00:24:09.839 00:24:09.839 job0: (groupid=0, jobs=1): err= 0: pid=2347374: Sun Nov 3 15:42:46 2024 00:24:09.839 write: IOPS=959, BW=240MiB/s (252MB/s)(2413MiB/10055msec); 0 zone resets 00:24:09.839 slat (usec): min=23, max=19105, avg=1000.24, stdev=1882.77 00:24:09.839 clat (msec): min=10, max=128, avg=65.65, stdev= 9.36 00:24:09.839 lat (msec): min=10, max=133, avg=66.65, stdev= 9.52 00:24:09.839 clat percentiles (msec): 00:24:09.839 | 1.00th=[ 41], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:24:09.839 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 71], 00:24:09.839 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 
75], 95.00th=[ 78], 00:24:09.839 | 99.00th=[ 82], 99.50th=[ 87], 99.90th=[ 120], 99.95th=[ 125], 00:24:09.839 | 99.99th=[ 129] 00:24:09.840 bw ( KiB/s): min=211968, max=286208, per=7.36%, avg=245452.80, stdev=27187.76, samples=20 00:24:09.840 iops : min= 828, max= 1118, avg=958.80, stdev=106.20, samples=20 00:24:09.840 lat (msec) : 20=0.15%, 50=1.88%, 100=97.72%, 250=0.26% 00:24:09.840 cpu : usr=1.94%, sys=3.97%, ctx=2459, majf=0, minf=1 00:24:09.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:09.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.840 issued rwts: total=0,9651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.840 job1: (groupid=0, jobs=1): err= 0: pid=2347386: Sun Nov 3 15:42:46 2024 00:24:09.840 write: IOPS=965, BW=241MiB/s (253MB/s)(2426MiB/10054msec); 0 zone resets 00:24:09.840 slat (usec): min=27, max=17210, avg=1024.95, stdev=1887.48 00:24:09.840 clat (msec): min=21, max=128, avg=65.25, stdev= 8.73 00:24:09.840 lat (msec): min=21, max=128, avg=66.28, stdev= 8.86 00:24:09.840 clat percentiles (msec): 00:24:09.840 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:24:09.840 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 70], 00:24:09.840 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 75], 95.00th=[ 78], 00:24:09.840 | 99.00th=[ 81], 99.50th=[ 83], 99.90th=[ 115], 99.95th=[ 125], 00:24:09.840 | 99.99th=[ 129] 00:24:09.840 bw ( KiB/s): min=211968, max=288256, per=7.40%, avg=246835.20, stdev=27560.98, samples=20 00:24:09.840 iops : min= 828, max= 1126, avg=964.20, stdev=107.66, samples=20 00:24:09.840 lat (msec) : 50=0.80%, 100=98.98%, 250=0.22% 00:24:09.840 cpu : usr=2.43%, sys=4.37%, ctx=2405, majf=0, minf=1 00:24:09.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:09.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.840 issued rwts: total=0,9705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.840 job2: (groupid=0, jobs=1): err= 0: pid=2347387: Sun Nov 3 15:42:46 2024 00:24:09.840 write: IOPS=1189, BW=297MiB/s (312MB/s)(2987MiB/10043msec); 0 zone resets 00:24:09.840 slat (usec): min=25, max=9936, avg=831.91, stdev=1490.48 00:24:09.840 clat (usec): min=14049, max=97070, avg=52951.16, stdev=13865.52 00:24:09.840 lat (usec): min=14108, max=97127, avg=53783.07, stdev=14039.57 00:24:09.840 clat percentiles (usec): 00:24:09.840 | 1.00th=[34341], 5.00th=[35914], 10.00th=[36963], 20.00th=[38011], 00:24:09.840 | 30.00th=[38536], 40.00th=[52167], 50.00th=[55313], 60.00th=[57410], 00:24:09.840 | 70.00th=[58459], 80.00th=[66847], 90.00th=[72877], 95.00th=[76022], 00:24:09.840 | 99.00th=[80217], 99.50th=[83362], 99.90th=[88605], 99.95th=[93848], 00:24:09.840 | 99.99th=[96994] 00:24:09.840 bw ( KiB/s): min=212992, max=434176, per=9.12%, avg=304269.40, stdev=78853.39, samples=20 00:24:09.840 iops : min= 832, max= 1696, avg=1188.55, stdev=308.02, samples=20 00:24:09.840 lat (msec) : 20=0.13%, 50=38.70%, 100=61.17% 00:24:09.840 cpu : usr=2.84%, sys=5.10%, ctx=2967, majf=0, minf=1 00:24:09.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:09.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:24:09.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.840 issued rwts: total=0,11947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.840 job3: (groupid=0, jobs=1): err= 0: pid=2347388: Sun Nov 3 15:42:46 2024 00:24:09.840 write: IOPS=1527, BW=382MiB/s (400MB/s)(3829MiB/10026msec); 0 zone resets 00:24:09.840 slat (usec): min=21, max=22532, avg=645.03, stdev=1202.81 00:24:09.840 clat (usec): min=9672, max=93519, avg=41238.01, stdev=8982.04 00:24:09.840 lat (usec): min=9745, max=93587, avg=41883.04, stdev=9080.89 00:24:09.840 clat percentiles (usec): 00:24:09.840 | 1.00th=[33817], 5.00th=[34866], 10.00th=[35390], 20.00th=[36439], 00:24:09.840 | 30.00th=[36963], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:24:09.840 | 70.00th=[39060], 80.00th=[47973], 90.00th=[51643], 95.00th=[65799], 00:24:09.840 | 99.00th=[74974], 99.50th=[76022], 99.90th=[79168], 99.95th=[80217], 00:24:09.840 | 99.99th=[83362] 00:24:09.840 bw ( KiB/s): min=222208, max=438784, per=11.71%, avg=390476.80, stdev=65714.55, samples=20 00:24:09.840 iops : min= 868, max= 1714, avg=1525.30, stdev=256.70, samples=20 00:24:09.840 lat (msec) : 10=0.05%, 20=0.10%, 50=84.09%, 100=15.76% 00:24:09.840 cpu : usr=3.24%, sys=4.76%, ctx=3697, majf=0, minf=1 00:24:09.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:09.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.840 issued rwts: total=0,15316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.840 job4: (groupid=0, jobs=1): err= 0: pid=2347389: Sun Nov 3 15:42:46 2024 00:24:09.840 write: IOPS=1005, BW=251MiB/s (264MB/s)(2524MiB/10042msec); 0 zone resets 00:24:09.840 slat (usec): min=22, max=46321, avg=968.08, stdev=1781.88 00:24:09.840 clat (usec): min=40978, max=99032, avg=62659.56, stdev=8406.56 00:24:09.840 lat (msec): min=43, max=116, avg=63.63, stdev= 8.48 00:24:09.840 clat percentiles (usec): 00:24:09.840 | 1.00th=[52167], 5.00th=[53740], 10.00th=[54264], 20.00th=[55313], 00:24:09.840 | 30.00th=[56361], 40.00th=[57410], 50.00th=[58459], 60.00th=[61604], 00:24:09.840 | 70.00th=[68682], 80.00th=[71828], 90.00th=[76022], 95.00th=[77071], 00:24:09.840 | 99.00th=[82314], 99.50th=[84411], 99.90th=[90702], 99.95th=[93848], 00:24:09.840 | 99.99th=[95945] 00:24:09.840 bw ( KiB/s): min=209314, max=289792, per=7.70%, avg=256891.30, stdev=30966.00, samples=20 00:24:09.840 iops : min= 817, max= 1132, avg=1003.45, stdev=121.01, samples=20 00:24:09.840 lat (msec) : 50=0.33%, 100=99.67% 00:24:09.840 cpu : usr=2.23%, sys=4.38%, ctx=2541, majf=0, minf=1 00:24:09.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:09.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.840 issued rwts: total=0,10097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.840 job5: (groupid=0, jobs=1): err= 0: pid=2347390: Sun Nov 3 15:42:46 2024 00:24:09.840 write: IOPS=1270, BW=318MiB/s (333MB/s)(3184MiB/10027msec); 0 zone resets 00:24:09.840 slat (usec): min=21, max=22527, avg=780.27, stdev=1419.29 00:24:09.840 clat (usec): min=25351, max=91088, avg=49588.71, 
stdev=11178.71 00:24:09.840 lat (usec): min=26563, max=94071, avg=50368.98, stdev=11327.22 00:24:09.840 clat percentiles (usec): 00:24:09.840 | 1.00th=[33817], 5.00th=[35390], 10.00th=[36439], 20.00th=[37487], 00:24:09.840 | 30.00th=[38011], 40.00th=[48497], 50.00th=[51643], 60.00th=[54264], 00:24:09.840 | 70.00th=[56361], 80.00th=[57410], 90.00th=[60031], 95.00th=[71828], 00:24:09.840 | 99.00th=[76022], 99.50th=[77071], 99.90th=[79168], 99.95th=[80217], 00:24:09.840 | 99.99th=[90702] 00:24:09.840 bw ( KiB/s): min=220160, max=437248, per=9.73%, avg=324428.80, stdev=73167.13, samples=20 00:24:09.840 iops : min= 860, max= 1708, avg=1267.30, stdev=285.81, samples=20 00:24:09.840 lat (msec) : 50=43.16%, 100=56.84% 00:24:09.840 cpu : usr=2.89%, sys=5.13%, ctx=3168, majf=0, minf=1 00:24:09.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:09.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.840 issued rwts: total=0,12736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.840 job6: (groupid=0, jobs=1): err= 0: pid=2347391: Sun Nov 3 15:42:46 2024 00:24:09.840 write: IOPS=1425, BW=356MiB/s (374MB/s)(3577MiB/10036msec); 0 zone resets 00:24:09.840 slat (usec): min=13, max=22019, avg=693.10, stdev=1450.53 00:24:09.840 clat (usec): min=759, max=82383, avg=44191.21, stdev=16564.45 00:24:09.840 lat (usec): min=1059, max=89914, avg=44884.31, stdev=16830.71 00:24:09.840 clat percentiles (usec): 00:24:09.840 | 1.00th=[11731], 5.00th=[17433], 10.00th=[17957], 20.00th=[33817], 00:24:09.840 | 30.00th=[35914], 40.00th=[36963], 50.00th=[50070], 60.00th=[52691], 00:24:09.840 | 70.00th=[55837], 80.00th=[56886], 90.00th=[59507], 95.00th=[70779], 00:24:09.840 | 99.00th=[76022], 99.50th=[77071], 99.90th=[79168], 99.95th=[79168], 00:24:09.840 | 99.99th=[81265] 00:24:09.840 bw ( KiB/s): min=220160, max=811520, per=10.93%, avg=364597.45, stdev=159994.99, samples=20 00:24:09.840 iops : min= 860, max= 3170, avg=1424.20, stdev=624.99, samples=20 00:24:09.840 lat (usec) : 1000=0.01% 00:24:09.840 lat (msec) : 2=0.10%, 4=0.21%, 10=0.53%, 20=17.73%, 50=31.08% 00:24:09.840 lat (msec) : 100=50.34% 00:24:09.840 cpu : usr=3.12%, sys=4.79%, ctx=3451, majf=0, minf=1 00:24:09.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:09.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.840 issued rwts: total=0,14306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.840 job7: (groupid=0, jobs=1): err= 0: pid=2347392: Sun Nov 3 15:42:46 2024 00:24:09.840 write: IOPS=1188, BW=297MiB/s (312MB/s)(2985MiB/10045msec); 0 zone resets 00:24:09.840 slat (usec): min=27, max=10650, avg=832.50, stdev=1498.47 00:24:09.840 clat (msec): min=3, max=101, avg=52.99, stdev=13.96 00:24:09.840 lat (msec): min=3, max=102, avg=53.82, stdev=14.13 00:24:09.840 clat percentiles (msec): 00:24:09.840 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 39], 00:24:09.840 | 30.00th=[ 39], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 58], 00:24:09.840 | 70.00th=[ 59], 80.00th=[ 67], 90.00th=[ 73], 95.00th=[ 77], 00:24:09.840 | 99.00th=[ 81], 99.50th=[ 82], 99.90th=[ 90], 99.95th=[ 95], 00:24:09.840 | 99.99th=[ 102] 00:24:09.840 bw ( KiB/s): 
min=212992, max=434176, per=9.12%, avg=304076.80, stdev=79001.64, samples=20 00:24:09.840 iops : min= 832, max= 1696, avg=1187.80, stdev=308.60, samples=20 00:24:09.840 lat (msec) : 4=0.01%, 10=0.07%, 20=0.18%, 50=38.72%, 100=61.02% 00:24:09.840 lat (msec) : 250=0.02% 00:24:09.840 cpu : usr=2.72%, sys=5.09%, ctx=2940, majf=0, minf=1 00:24:09.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:09.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.840 issued rwts: total=0,11941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.841 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.841 job8: (groupid=0, jobs=1): err= 0: pid=2347393: Sun Nov 3 15:42:46 2024 00:24:09.841 write: IOPS=968, BW=242MiB/s (254MB/s)(2436MiB/10055msec); 0 zone resets 00:24:09.841 slat (usec): min=30, max=11636, avg=1020.85, stdev=1884.91 00:24:09.841 clat (msec): min=4, max=130, avg=65.01, stdev= 9.15 00:24:09.841 lat (msec): min=4, max=130, avg=66.03, stdev= 9.29 00:24:09.841 clat percentiles (msec): 00:24:09.841 | 1.00th=[ 50], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:24:09.841 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 70], 00:24:09.841 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 75], 95.00th=[ 78], 00:24:09.841 | 99.00th=[ 81], 99.50th=[ 84], 99.90th=[ 122], 99.95th=[ 125], 00:24:09.841 | 99.99th=[ 131] 00:24:09.841 bw ( KiB/s): min=212480, max=288256, per=7.43%, avg=247808.00, stdev=27951.74, samples=20 00:24:09.841 iops : min= 830, max= 1126, avg=968.00, stdev=109.19, samples=20 00:24:09.841 lat (msec) : 10=0.11%, 20=0.12%, 50=0.94%, 100=98.55%, 250=0.27% 00:24:09.841 cpu : usr=2.41%, sys=4.42%, ctx=2410, majf=0, minf=1 00:24:09.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:09.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.841 issued rwts: total=0,9743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.841 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.841 job9: (groupid=0, jobs=1): err= 0: pid=2347394: Sun Nov 3 15:42:46 2024 00:24:09.841 write: IOPS=944, BW=236MiB/s (248MB/s)(2375MiB/10055msec); 0 zone resets 00:24:09.841 slat (usec): min=26, max=14354, avg=1018.49, stdev=1885.29 00:24:09.841 clat (msec): min=15, max=127, avg=66.70, stdev= 8.89 00:24:09.841 lat (msec): min=15, max=128, avg=67.72, stdev= 9.02 00:24:09.841 clat percentiles (msec): 00:24:09.841 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 57], 00:24:09.841 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 71], 00:24:09.841 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 77], 95.00th=[ 79], 00:24:09.841 | 99.00th=[ 82], 99.50th=[ 85], 99.90th=[ 115], 99.95th=[ 120], 00:24:09.841 | 99.99th=[ 129] 00:24:09.841 bw ( KiB/s): min=212992, max=289792, per=7.25%, avg=241589.50, stdev=27362.51, samples=20 00:24:09.841 iops : min= 832, max= 1132, avg=943.70, stdev=106.87, samples=20 00:24:09.841 lat (msec) : 20=0.17%, 50=0.44%, 100=99.16%, 250=0.23% 00:24:09.841 cpu : usr=2.40%, sys=4.15%, ctx=2429, majf=0, minf=1 00:24:09.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:09.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.841 issued rwts: 
total=0,9499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.841 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.841 job10: (groupid=0, jobs=1): err= 0: pid=2347395: Sun Nov 3 15:42:46 2024 00:24:09.841 write: IOPS=1597, BW=399MiB/s (419MB/s)(4007MiB/10034msec); 0 zone resets 00:24:09.841 slat (usec): min=20, max=39158, avg=608.93, stdev=1345.99 00:24:09.841 clat (usec): min=755, max=102122, avg=39442.28, stdev=14471.15 00:24:09.841 lat (usec): min=812, max=102200, avg=40051.21, stdev=14683.04 00:24:09.841 clat percentiles (usec): 00:24:09.841 | 1.00th=[17433], 5.00th=[18482], 10.00th=[19006], 20.00th=[34866], 00:24:09.841 | 30.00th=[35914], 40.00th=[36439], 50.00th=[37487], 60.00th=[37487], 00:24:09.841 | 70.00th=[38536], 80.00th=[39584], 90.00th=[66847], 95.00th=[70779], 00:24:09.841 | 99.00th=[76022], 99.50th=[78119], 99.90th=[81265], 99.95th=[84411], 00:24:09.841 | 99.99th=[88605] 00:24:09.841 bw ( KiB/s): min=224768, max=865792, per=12.26%, avg=408704.00, stdev=138193.97, samples=20 00:24:09.841 iops : min= 878, max= 3382, avg=1596.50, stdev=539.82, samples=20 00:24:09.841 lat (usec) : 1000=0.03% 00:24:09.841 lat (msec) : 2=0.11%, 10=0.05%, 20=13.79%, 50=69.31%, 100=16.71% 00:24:09.841 lat (msec) : 250=0.01% 00:24:09.841 cpu : usr=3.10%, sys=4.71%, ctx=3820, majf=0, minf=1 00:24:09.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:09.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:09.841 issued rwts: total=0,16028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.841 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:09.841 00:24:09.841 Run status group 0 (all jobs): 00:24:09.841 WRITE: bw=3256MiB/s (3414MB/s), 236MiB/s-399MiB/s (248MB/s-419MB/s), io=32.0GiB (34.3GB), run=10026-10055msec 00:24:09.841 00:24:09.841 Disk stats (read/write): 00:24:09.841 nvme0n1: ios=49/18974, merge=0/0, ticks=13/1217071, in_queue=1217084, util=96.73% 00:24:09.841 nvme10n1: ios=0/19078, merge=0/0, ticks=0/1212537, in_queue=1212537, util=96.93% 00:24:09.841 nvme1n1: ios=0/23495, merge=0/0, ticks=0/1218183, in_queue=1218183, util=97.24% 00:24:09.841 nvme2n1: ios=0/30089, merge=0/0, ticks=0/1220197, in_queue=1220197, util=97.41% 00:24:09.841 nvme3n1: ios=0/19796, merge=0/0, ticks=0/1217753, in_queue=1217753, util=97.47% 00:24:09.841 nvme4n1: ios=0/24929, merge=0/0, ticks=0/1217598, in_queue=1217598, util=97.86% 00:24:09.841 nvme5n1: ios=0/28063, merge=0/0, ticks=0/1217381, in_queue=1217381, util=98.06% 00:24:09.841 nvme6n1: ios=0/23490, merge=0/0, ticks=0/1218803, in_queue=1218803, util=98.19% 00:24:09.841 nvme7n1: ios=0/19158, merge=0/0, ticks=0/1213264, in_queue=1213264, util=98.62% 00:24:09.841 nvme8n1: ios=0/18660, merge=0/0, ticks=0/1217031, in_queue=1217031, util=98.84% 00:24:09.841 nvme9n1: ios=0/31502, merge=0/0, ticks=0/1220171, in_queue=1220171, util=98.98% 00:24:09.841 15:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:09.841 15:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:09.841 15:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.841 15:42:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:10.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 
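Note on the fio summary above: it aggregates eleven concurrent write jobs, one per connected SPDK namespace (nvme0n1 through nvme10n1 in the disk stats), into a single reporting group of roughly 3.2 GiB/s over a ~10 s run at queue depth 64. The job file multiconnection.sh feeds to fio is not reproduced in this log, so the following is only a rough sketch: iodepth, rw mode, runtime and device names follow the output above, the 256k block size follows from BW/IOPS in the per-job lines (e.g. 356 MiB/s / 1425 IOPS ≈ 256 KiB), and the I/O engine is an assumption.

    #!/usr/bin/env bash
    # Rough sketch of the grouped write workload summarized above.
    # iodepth=64, rw=write, ~10s runtime and targets nvme0n1..nvme10n1
    # come from the log; --ioengine (and time_based) are assumptions.
    jobs=()
    i=0
    for dev in /dev/nvme{0..10}n1; do
        jobs+=(--name="job$i" --filename="$dev")
        i=$((i + 1))
    done
    fio --rw=write --bs=256k --iodepth=64 --ioengine=libaio --direct=1 \
        --time_based --runtime=10 --group_reporting "${jobs[@]}"

Per-job throughput within the group varies noticeably (job6 reports 1425 IOPS against job9's 944 in the excerpts above), which is expected when all jobs share one reporting group over independently backed namespaces.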
1 controller(s) 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.409 15:42:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:11.346 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:11.346 15:42:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:11.346 15:42:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:11.346 15:42:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:11.346 15:42:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:24:11.346 15:42:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:11.346 15:42:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:24:11.346 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:11.346 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:11.346 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.346 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.346 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.346 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.346 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:12.284 
NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:12.284 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:12.284 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:12.284 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:12.284 15:42:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:24:12.284 15:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:12.284 15:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:24:12.284 15:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:12.284 15:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:12.284 15:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.284 15:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:12.284 15:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.284 15:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.284 15:42:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:13.662 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK4 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.662 15:42:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode5 00:24:14.600 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK5 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:14.600 15:42:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:15.538 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.538 15:42:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:16.476 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.476 15:42:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:17.413 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.413 15:42:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:18.792 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.792 15:42:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:19.360 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:19.360 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:19.360 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:19.619 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:19.619 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:24:19.619 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:24:19.619 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:19.619 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:19.619 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:19.619 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.620 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.620 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.620 15:42:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.620 15:42:57 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:20.557 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.557 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:20.558 rmmod nvme_rdma 00:24:20.558 rmmod nvme_fabrics 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 2338851 ']' 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 2338851 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@952 -- # '[' -z 2338851 ']' 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 2338851 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.558 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2338851 00:24:20.817 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:20.817 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:20.817 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2338851' 00:24:20.817 killing process with pid 2338851 00:24:20.817 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 2338851 00:24:20.817 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@976 -- # wait 2338851 00:24:21.076 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.076 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:21.076 00:24:21.076 real 1m14.601s 00:24:21.076 user 4m53.124s 00:24:21.076 sys 0m19.435s 00:24:21.076 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:21.076 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.076 ************************************ 00:24:21.076 END TEST nvmf_multiconnection 00:24:21.076 ************************************ 00:24:21.076 15:42:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:24:21.076 15:42:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:21.076 15:42:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:21.076 15:42:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:21.337 ************************************ 00:24:21.337 START TEST nvmf_initiator_timeout 00:24:21.337 ************************************ 00:24:21.337 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:24:21.337 * Looking for test storage... 
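The teardown traced above repeats the same steps for each of the eleven subsystems (multiconnection.sh lines 37-40): disconnect the initiator from the NQN, wait until the namespace's serial number (SPDK1..SPDK11) no longer appears in lsblk, then delete the subsystem over RPC. Condensed into plain shell, with the retry bookkeeping of waitforserial_disconnect simplified:

    # Condensed form of the per-subsystem teardown loop traced above.
    # rpc_cmd is the SPDK test-framework RPC wrapper seen in the trace;
    # the real waitforserial_disconnect also bounds its retries.
    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # poll until no block device reports the serial any more
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done

Only once all eleven subsystems are gone does the script remove its fio state file and run nvmftestfini, which unloads nvme-rdma and nvme-fabrics and kills the target process (pid 2338851 above).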
00:24:21.337 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:21.337 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:21.337 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:24:21.337 15:42:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:21.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.337 --rc genhtml_branch_coverage=1 00:24:21.337 --rc genhtml_function_coverage=1 00:24:21.337 --rc genhtml_legend=1 00:24:21.337 --rc geninfo_all_blocks=1 00:24:21.337 --rc geninfo_unexecuted_blocks=1 00:24:21.337 00:24:21.337 ' 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:21.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.337 --rc genhtml_branch_coverage=1 00:24:21.337 --rc genhtml_function_coverage=1 00:24:21.337 --rc genhtml_legend=1 00:24:21.337 --rc geninfo_all_blocks=1 00:24:21.337 --rc geninfo_unexecuted_blocks=1 00:24:21.337 00:24:21.337 ' 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:21.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.337 --rc genhtml_branch_coverage=1 00:24:21.337 --rc genhtml_function_coverage=1 00:24:21.337 --rc genhtml_legend=1 00:24:21.337 --rc geninfo_all_blocks=1 00:24:21.337 --rc geninfo_unexecuted_blocks=1 00:24:21.337 00:24:21.337 ' 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:21.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.337 --rc genhtml_branch_coverage=1 00:24:21.337 --rc genhtml_function_coverage=1 00:24:21.337 --rc genhtml_legend=1 00:24:21.337 --rc geninfo_all_blocks=1 00:24:21.337 --rc geninfo_unexecuted_blocks=1 00:24:21.337 00:24:21.337 ' 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.337 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.338 15:42:59 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:21.338 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:24:21.338 15:42:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:29.495 15:43:05 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.495 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:29.496 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:29.496 
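Device discovery in common.sh works by PCI ID: the script keeps per-vendor ID lists (e810/x722 for Intel 0x8086, mlx for Mellanox 0x15b3) and, for every match, resolves the PCI function to its netdev through sysfs; here both 0x1015 ports resolve to mlx_0_0 and mlx_0_1, as the trace goes on to show. A simplified sketch of that resolution plus the get_ip_address readback used later in the trace (the real script additionally caches the PCI bus scan and special-cases unbound drivers):

    # Simplified sketch of the discovery traced above: match Mellanox
    # PCI functions, map each to its netdev via sysfs, and print its
    # IPv4 address the way get_ip_address does (awk field 4, prefix
    # length stripped with cut).
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x15b3 ]] || continue   # Mellanox
        [[ $(cat "$pci/device") == 0x1015 ]] || continue   # as matched here
        for net in "$pci"/net/*; do
            ifname=${net##*/}                               # e.g. mlx_0_0
            ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
        done
    done

On this rig that yields 192.168.100.8 and 192.168.100.9, which become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP further down in the trace.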
15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:29.496 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:29.496 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 
-- # (( 1 == 0 )) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:29.496 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:29.496 15:43:05 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:29.496 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:29.496 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:29.496 altname enp217s0f0np0 00:24:29.496 altname ens818f0np0 00:24:29.496 inet 192.168.100.8/24 scope global mlx_0_0 00:24:29.496 valid_lft forever preferred_lft forever 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:29.496 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:29.497 15:43:05 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:29.497 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:29.497 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:29.497 altname enp217s0f1np1 00:24:29.497 altname ens818f1np1 00:24:29.497 inet 192.168.100.9/24 scope global mlx_0_1 00:24:29.497 valid_lft forever preferred_lft forever 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 
-- # continue 2 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:29.497 192.168.100.9' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:29.497 192.168.100.9' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:29.497 192.168.100.9' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:29.497 15:43:05 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=2354134 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 2354134 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 2354134 ']' 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:29.497 [2024-11-03 15:43:06.079622] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:24:29.497 [2024-11-03 15:43:06.079678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.497 [2024-11-03 15:43:06.157435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.497 [2024-11-03 15:43:06.179906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.497 [2024-11-03 15:43:06.179945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.497 [2024-11-03 15:43:06.179955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.497 [2024-11-03 15:43:06.179963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.497 [2024-11-03 15:43:06.179974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
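The records above are nvmfappstart: the harness launches the target binary, remembers its pid in nvmfpid, and blocks in waitforlisten until the app is serving RPCs on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, reusing the binary path and flags recorded in this run (the socket poll below is a simplification of what waitforlisten really does):

    # -i 0: shared-memory id, -e 0xFFFF: all tracepoint groups, -m 0xF: cores 0-3
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # crude stand-in for waitforlisten: poll for the UNIX-domain RPC socket
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done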
00:24:29.497 [2024-11-03 15:43:06.181614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.497 [2024-11-03 15:43:06.181688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.497 [2024-11-03 15:43:06.181799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.497 [2024-11-03 15:43:06.181800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.497 Malloc0 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.497 Delay0 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.497 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.498 [2024-11-03 15:43:06.386883] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x25bfc10/0x24b2700) succeed. 00:24:29.498 [2024-11-03 15:43:06.396234] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x25c1250/0x24f3da0) succeed. 
00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:29.498 [2024-11-03 15:43:06.540770] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.498 15:43:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:29.808 15:43:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:29.808 15:43:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:24:29.808 15:43:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.808 15:43:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:24:29.808 15:43:07 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:24:32.361 15:43:09 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:24:32.361 15:43:09 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:24:32.361 15:43:09 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:24:32.361 15:43:09 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:24:32.361 15:43:09 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.361 15:43:09 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:24:32.361 15:43:09 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2354736 00:24:32.361 15:43:09 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:32.361 15:43:09 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:32.361 [global] 00:24:32.361 thread=1 00:24:32.361 invalidate=1 00:24:32.361 rw=write 00:24:32.361 time_based=1 00:24:32.361 runtime=60 00:24:32.361 ioengine=libaio 00:24:32.361 direct=1 00:24:32.361 bs=4096 00:24:32.361 iodepth=1 00:24:32.361 norandommap=0 00:24:32.361 numjobs=1 00:24:32.361 00:24:32.361 verify_dump=1 00:24:32.361 verify_backlog=512 00:24:32.361 verify_state_save=0 00:24:32.361 do_verify=1 00:24:32.361 verify=crc32c-intel 00:24:32.361 [job0] 00:24:32.361 filename=/dev/nvme0n1 00:24:32.361 Could not set queue depth (nvme0n1) 00:24:32.361 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:32.361 fio-3.35 00:24:32.361 Starting 1 thread 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:34.897 true 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:34.897 true 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:34.897 true 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:34.897 true 00:24:34.897 15:43:12 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.897 15:43:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:38.186 true 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:38.186 true 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:38.186 true 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:38.186 true 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:38.186 15:43:15 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2354736 00:25:34.423 00:25:34.423 job0: (groupid=0, jobs=1): err= 0: pid=2354999: Sun Nov 3 15:44:10 2024 00:25:34.423 read: IOPS=1239, BW=4959KiB/s (5078kB/s)(291MiB/60000msec) 00:25:34.423 slat (nsec): min=4004, max=41713, avg=9315.13, stdev=1106.96 00:25:34.423 clat (usec): min=69, max=313, avg=105.06, stdev= 6.39 00:25:34.423 lat (usec): min=93, max=322, avg=114.37, stdev= 6.46 00:25:34.423 clat percentiles (usec): 00:25:34.423 | 1.00th=[ 93], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 100], 00:25:34.423 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 106], 00:25:34.423 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 114], 95.00th=[ 116], 00:25:34.423 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 128], 99.95th=[ 133], 00:25:34.423 | 99.99th=[ 210] 00:25:34.423 write: IOPS=1245, BW=4983KiB/s (5103kB/s)(292MiB/60000msec); 0 zone resets 00:25:34.423 slat (usec): min=3, 
max=3243, avg=12.12, stdev=12.65 00:25:34.423 clat (usec): min=39, max=42512k, avg=671.52, stdev=155487.44 00:25:34.423 lat (usec): min=91, max=42512k, avg=683.64, stdev=155487.43 00:25:34.423 clat percentiles (usec): 00:25:34.423 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 98], 00:25:34.423 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:25:34.423 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 114], 00:25:34.423 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 143], 00:25:34.423 | 99.99th=[ 297] 00:25:34.423 bw ( KiB/s): min= 3272, max=19064, per=100.00%, avg=16620.11, stdev=2483.58, samples=35 00:25:34.423 iops : min= 818, max= 4766, avg=4154.97, stdev=620.87, samples=35 00:25:34.423 lat (usec) : 50=0.01%, 100=27.26%, 250=72.73%, 500=0.01%, 750=0.01% 00:25:34.423 lat (msec) : >=2000=0.01% 00:25:34.423 cpu : usr=2.01%, sys=3.15%, ctx=149138, majf=0, minf=144 00:25:34.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:34.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.423 issued rwts: total=74379,74752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:34.423 00:25:34.423 Run status group 0 (all jobs): 00:25:34.423 READ: bw=4959KiB/s (5078kB/s), 4959KiB/s-4959KiB/s (5078kB/s-5078kB/s), io=291MiB (305MB), run=60000-60000msec 00:25:34.423 WRITE: bw=4983KiB/s (5103kB/s), 4983KiB/s-4983KiB/s (5103kB/s-5103kB/s), io=292MiB (306MB), run=60000-60000msec 00:25:34.423 00:25:34.423 Disk stats (read/write): 00:25:34.423 nvme0n1: ios=74329/74240, merge=0/0, ticks=7107/6955, in_queue=14062, util=99.61% 00:25:34.423 15:44:10 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:34.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:34.423 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:34.423 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:34.424 nvmf hotplug test: fio successful as expected 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.424 15:44:11 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:34.424 rmmod nvme_rdma 00:25:34.424 rmmod nvme_fabrics 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 2354134 ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 2354134 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 2354134 ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 2354134 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2354134 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2354134' 00:25:34.424 killing process with pid 2354134 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 2354134 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 
-- # wait 2354134 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:34.424 00:25:34.424 real 1m12.544s 00:25:34.424 user 4m31.471s 00:25:34.424 sys 0m7.989s 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.424 ************************************ 00:25:34.424 END TEST nvmf_initiator_timeout 00:25:34.424 ************************************ 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:34.424 ************************************ 00:25:34.424 START TEST nvmf_srq_overwhelm 00:25:34.424 ************************************ 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:34.424 * Looking for test storage... 
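That closes nvmf_initiator_timeout. The heart of the run above is the latency toggle on Delay0 while fio writes to /dev/nvme0n1; the trace condenses to the calls below (values exactly as recorded, in microseconds; the extra zero on p99_write is what initiator_timeout.sh:43 actually passed):

    # stall the namespace for ~31 s while fio is mid-run
    $rpc bdev_delay_update_latency Delay0 avg_read  31000000
    $rpc bdev_delay_update_latency Delay0 avg_write 31000000
    $rpc bdev_delay_update_latency Delay0 p99_read  31000000
    $rpc bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # drop back to 30 us and let fio finish; success is reported above as
    # 'nvmf hotplug test: fio successful as expected'
    for lat in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 "$lat" 30
    done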
00:25:34.424 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lcov --version 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:34.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.424 --rc genhtml_branch_coverage=1 00:25:34.424 --rc genhtml_function_coverage=1 00:25:34.424 --rc genhtml_legend=1 00:25:34.424 --rc geninfo_all_blocks=1 00:25:34.424 --rc geninfo_unexecuted_blocks=1 00:25:34.424 00:25:34.424 ' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:34.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.424 --rc genhtml_branch_coverage=1 00:25:34.424 --rc genhtml_function_coverage=1 00:25:34.424 --rc genhtml_legend=1 00:25:34.424 --rc geninfo_all_blocks=1 00:25:34.424 --rc geninfo_unexecuted_blocks=1 00:25:34.424 00:25:34.424 ' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:34.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.424 --rc genhtml_branch_coverage=1 00:25:34.424 --rc genhtml_function_coverage=1 00:25:34.424 --rc genhtml_legend=1 00:25:34.424 --rc geninfo_all_blocks=1 00:25:34.424 --rc geninfo_unexecuted_blocks=1 00:25:34.424 00:25:34.424 ' 00:25:34.424 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:34.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.424 --rc genhtml_branch_coverage=1 00:25:34.424 --rc genhtml_function_coverage=1 00:25:34.424 --rc genhtml_legend=1 00:25:34.425 --rc geninfo_all_blocks=1 00:25:34.425 --rc geninfo_unexecuted_blocks=1 00:25:34.425 00:25:34.425 ' 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
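nvmf/common.sh, sourced just above, pins down the identity every later nvme connect will present: ports 4420-4422, the 192.168.100 prefix, and a host NQN from nvme gen-hostnqn whose uuid suffix doubles as the host ID. A sketch of that derivation with the values recorded in this run (the parameter expansion is one plausible way to take the suffix, not necessarily the exact line in common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)  # nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e here
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # strip through the last ':' to keep the bare uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")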
00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:34.425 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.425 15:44:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:40.999 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:40.999 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:40.999 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:40.999 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:41.000 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
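Device discovery for the srq_overwhelm run repeats the same sysfs walk: each mlx5 PCI function that passes the device-id filter is mapped to its kernel netdev by globbing /sys, which is where the 'Found net devices under ...' lines come from. A standalone sketch of that lookup, using the PCI addresses found above:

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # every entry under .../net/ is a netdev bound to this PCI function
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")  # basenames only: mlx_0_0, mlx_0_1
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done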
00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:41.000 15:44:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:41.000 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:41.000 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:41.000 altname enp217s0f0np0 00:25:41.000 altname ens818f0np0 00:25:41.000 inet 192.168.100.8/24 scope global mlx_0_0 00:25:41.000 valid_lft forever preferred_lft forever 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:41.000 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:41.000 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:41.000 altname enp217s0f1np1 00:25:41.000 altname ens818f1np1 00:25:41.000 inet 192.168.100.9/24 scope global mlx_0_1 00:25:41.000 valid_lft forever preferred_lft forever 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:41.000 192.168.100.9' 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:41.000 192.168.100.9' 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:41.000 192.168.100.9' 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:25:41.000 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=2368451 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 2368451 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # '[' -z 2368451 ']' 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
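The trace above is nvmf/common.sh preparing the RDMA environment before the SRQ-overwhelm test proper: it loads the InfiniBand/RDMA kernel modules, discovers the IPv4 address on each Mellanox port, and launches nvmf_tgt. A minimal sketch of the same bring-up, assuming the netdevs already carry the mlx_0_0/mlx_0_1 names and 192.168.100.0/24 addresses seen on this rig:

  # Load the kernel-side RDMA stack (mirrors load_ib_rdma_modules in the trace)
  modprobe ib_cm
  modprobe ib_core
  modprobe ib_umad
  modprobe ib_uverbs
  modprobe iw_cm
  modprobe rdma_cm
  modprobe rdma_ucm
  # Extract the IPv4 address of each RDMA-capable port (mirrors get_ip_address)
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9
  # Host-side initiator driver needed by the later 'nvme connect' calls
  modprobe nvme-rdma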
00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.001 [2024-11-03 15:44:18.267498] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:25:41.001 [2024-11-03 15:44:18.267550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.001 [2024-11-03 15:44:18.344574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.001 [2024-11-03 15:44:18.366872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.001 [2024-11-03 15:44:18.366911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.001 [2024-11-03 15:44:18.366921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.001 [2024-11-03 15:44:18.366929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.001 [2024-11-03 15:44:18.366936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.001 [2024-11-03 15:44:18.368471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.001 [2024-11-03 15:44:18.368569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.001 [2024-11-03 15:44:18.368657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.001 [2024-11-03 15:44:18.368659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@866 -- # return 0 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.001 [2024-11-03 15:44:18.533669] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x242bc50/0x2430100) succeed. 00:25:41.001 [2024-11-03 15:44:18.542694] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x242d290/0x24717a0) succeed. 
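With both mlx5 IB devices registered, the harness creates the RDMA transport over JSON-RPC. A sketch of the equivalent manual steps, assuming scripts/rpc.py as the RPC client (the rpc_cmd wrapper in the trace resolves to it) and an SPDK build tree as the working directory:

  # Start the target on cores 0-3 with tracepoints enabled (flags copied from the trace)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # Create the RDMA transport; -s 1024 presumably caps the shared receive
  # queue (SRQ) depth -- the resource this test is named for overwhelming
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024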
00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.001 Malloc0 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.001 [2024-11-03 15:44:18.648528] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.001 15:44:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- 
# lsblk -l -o NAME 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.939 Malloc1 00:25:41.939 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.940 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:41.940 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.940 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.940 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.940 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:41.940 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.940 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:41.940 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.940 15:44:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme1n1 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme1n1 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:43.317 Malloc2 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.317 15:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme2n1 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme2n1 00:25:44.255 15:44:21 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.255 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:44.256 Malloc3 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.256 15:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme3n1 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme3n1 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:25:45.192 
15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:45.192 Malloc4 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.192 15:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme4n1 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme4n1 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
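Each pass of the seq 0 5 loop repeats one pattern: create a subsystem, back it with a 64 MiB malloc bdev (64 MB at a 512-byte block size), expose it on the RDMA listener, connect from the host side, and poll until the block device appears. A condensed sketch of a single iteration, with the host NQN/UUID copied from the trace:

  i=0   # the trace iterates i over 0..5 (cnode0..cnode5)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  nvme connect -i 15 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e \
      -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
  # waitforblk: retry 'lsblk -l -o NAME | grep -q -w nvme${i}n1' until the namespace shows up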
00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:46.129 Malloc5 00:25:46.129 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.130 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:46.130 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.130 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:46.130 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.130 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:25:46.130 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.130 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:46.389 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.389 15:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:25:47.326 15:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:25:47.326 15:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:25:47.326 15:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:25:47.326 15:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme5n1 00:25:47.326 15:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:25:47.326 15:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme5n1 00:25:47.326 15:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:25:47.326 15:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:25:47.326 
[global] 00:25:47.326 thread=1 00:25:47.326 invalidate=1 00:25:47.326 rw=read 00:25:47.326 time_based=1 00:25:47.326 runtime=10 00:25:47.326 ioengine=libaio 00:25:47.326 direct=1 00:25:47.326 bs=1048576 00:25:47.326 iodepth=128 00:25:47.326 norandommap=1 00:25:47.326 numjobs=13 00:25:47.326 00:25:47.326 [job0] 00:25:47.326 filename=/dev/nvme0n1 00:25:47.326 [job1] 00:25:47.326 filename=/dev/nvme1n1 00:25:47.326 [job2] 00:25:47.326 filename=/dev/nvme2n1 00:25:47.326 [job3] 00:25:47.326 filename=/dev/nvme3n1 00:25:47.326 [job4] 00:25:47.326 filename=/dev/nvme4n1 00:25:47.326 [job5] 00:25:47.326 filename=/dev/nvme5n1 00:25:47.326 Could not set queue depth (nvme0n1) 00:25:47.326 Could not set queue depth (nvme1n1) 00:25:47.326 Could not set queue depth (nvme2n1) 00:25:47.326 Could not set queue depth (nvme3n1) 00:25:47.326 Could not set queue depth (nvme4n1) 00:25:47.326 Could not set queue depth (nvme5n1) 00:25:47.892 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:47.892 ... 00:25:47.892 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:47.892 ... 00:25:47.892 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:47.892 ... 00:25:47.892 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:47.892 ... 00:25:47.892 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:47.892 ... 00:25:47.892 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:25:47.892 ... 
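The job file the wrapper dumped above is flattened into the log stream; reconstructed, it reads as below. The fio-wrapper flags map directly onto it (-t read -> rw=read, -r 10 -> runtime=10, -n 13 -> numjobs=13, -i 1048576 -> bs, -d 128 -> iodepth), and 6 job sections times numjobs=13 gives exactly the 78 threads fio reports starting next.

  [global]
  thread=1
  invalidate=1
  rw=read
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=1048576
  iodepth=128
  norandommap=1
  numjobs=13

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme1n1
  [job2]
  filename=/dev/nvme2n1
  [job3]
  filename=/dev/nvme3n1
  [job4]
  filename=/dev/nvme4n1
  [job5]
  filename=/dev/nvme5n1

Note that 78 threads each keeping 128 one-MiB reads in flight is roughly 10,000 outstanding I/Os against a transport whose SRQ depth was capped at 1024 above, so the workload deliberately oversubscribes the shared receive queue -- presumably the "overwhelm" in nvmf_srq_overwhelm.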
00:25:47.892 fio-3.35 00:25:47.892 Starting 78 threads 00:26:02.787 00:26:02.787 job0: (groupid=0, jobs=1): err= 0: pid=2369787: Sun Nov 3 15:44:38 2024 00:26:02.787 read: IOPS=6, BW=6584KiB/s (6742kB/s)(83.0MiB/12909msec) 00:26:02.787 slat (usec): min=1014, max=2106.9k, avg=130317.96, stdev=490713.29 00:26:02.787 clat (msec): min=2091, max=12907, avg=11111.16, stdev=2962.88 00:26:02.787 lat (msec): min=4191, max=12908, avg=11241.48, stdev=2794.43 00:26:02.787 clat percentiles (msec): 00:26:02.787 | 1.00th=[ 2089], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8490], 00:26:02.787 | 30.00th=[12684], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:26:02.787 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953], 00:26:02.787 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:26:02.787 | 99.99th=[12953] 00:26:02.787 lat (msec) : >=2000=100.00% 00:26:02.787 cpu : usr=0.00%, sys=0.77%, ctx=112, majf=0, minf=21249 00:26:02.787 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:26:02.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.787 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.788 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.788 job0: (groupid=0, jobs=1): err= 0: pid=2369788: Sun Nov 3 15:44:38 2024 00:26:02.788 read: IOPS=1, BW=1767KiB/s (1809kB/s)(22.0MiB/12750msec) 00:26:02.788 slat (msec): min=7, max=2116, avg=484.15, stdev=879.22 00:26:02.788 clat (msec): min=2098, max=12691, avg=7734.01, stdev=3040.38 00:26:02.788 lat (msec): min=4215, max=12749, avg=8218.16, stdev=2946.86 00:26:02.788 clat percentiles (msec): 00:26:02.788 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4279], 00:26:02.788 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8557], 00:26:02.788 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[12684], 95.00th=[12684], 00:26:02.788 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.788 | 99.99th=[12684] 00:26:02.788 lat (msec) : >=2000=100.00% 00:26:02.788 cpu : usr=0.00%, sys=0.13%, ctx=51, majf=0, minf=5633 00:26:02.788 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:26:02.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.788 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.788 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.788 job0: (groupid=0, jobs=1): err= 0: pid=2369789: Sun Nov 3 15:44:38 2024 00:26:02.788 read: IOPS=46, BW=46.3MiB/s (48.6MB/s)(499MiB/10767msec) 00:26:02.788 slat (usec): min=712, max=2100.8k, avg=21457.85, stdev=160299.70 00:26:02.788 clat (msec): min=54, max=7388, avg=2564.18, stdev=2466.07 00:26:02.788 lat (msec): min=1014, max=7402, avg=2585.64, stdev=2469.24 00:26:02.788 clat percentiles (msec): 00:26:02.788 | 1.00th=[ 1011], 5.00th=[ 1028], 10.00th=[ 1036], 20.00th=[ 1053], 00:26:02.788 | 30.00th=[ 1083], 40.00th=[ 1099], 50.00th=[ 1116], 60.00th=[ 1116], 00:26:02.788 | 70.00th=[ 1301], 80.00th=[ 6611], 90.00th=[ 7013], 95.00th=[ 7215], 00:26:02.788 | 99.00th=[ 7349], 99.50th=[ 7349], 99.90th=[ 7416], 99.95th=[ 7416], 00:26:02.788 | 99.99th=[ 7416] 00:26:02.788 bw ( KiB/s): min= 2048, max=126976, per=2.43%, avg=75952.90, stdev=52436.31, samples=10 00:26:02.788 iops : min= 2, max= 
124, avg=74.00, stdev=51.26, samples=10 00:26:02.788 lat (msec) : 100=0.20%, 2000=71.54%, >=2000=28.26% 00:26:02.788 cpu : usr=0.03%, sys=2.01%, ctx=895, majf=0, minf=32769 00:26:02.788 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4% 00:26:02.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.788 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:02.788 issued rwts: total=499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.788 job0: (groupid=0, jobs=1): err= 0: pid=2369790: Sun Nov 3 15:44:38 2024 00:26:02.788 read: IOPS=1, BW=1364KiB/s (1397kB/s)(17.0MiB/12764msec) 00:26:02.788 slat (msec): min=5, max=2114, avg=627.80, stdev=967.35 00:26:02.788 clat (msec): min=2091, max=12684, avg=8356.73, stdev=3702.94 00:26:02.788 lat (msec): min=4205, max=12763, avg=8984.53, stdev=3471.79 00:26:02.788 clat percentiles (msec): 00:26:02.788 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 4212], 20.00th=[ 4245], 00:26:02.788 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10671], 00:26:02.788 | 70.00th=[10671], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:26:02.788 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.788 | 99.99th=[12684] 00:26:02.788 lat (msec) : >=2000=100.00% 00:26:02.788 cpu : usr=0.01%, sys=0.09%, ctx=48, majf=0, minf=4353 00:26:02.788 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:02.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.788 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.788 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.788 job0: (groupid=0, jobs=1): err= 0: pid=2369791: Sun Nov 3 15:44:38 2024 00:26:02.788 read: IOPS=86, BW=86.9MiB/s (91.1MB/s)(1116MiB/12844msec) 00:26:02.788 slat (usec): min=51, max=2112.8k, avg=9608.30, stdev=122836.19 00:26:02.788 clat (msec): min=212, max=10700, avg=1069.41, stdev=2069.76 00:26:02.788 lat (msec): min=214, max=10715, avg=1079.02, stdev=2085.81 00:26:02.788 clat percentiles (msec): 00:26:02.788 | 1.00th=[ 215], 5.00th=[ 215], 10.00th=[ 218], 20.00th=[ 218], 00:26:02.788 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 222], 00:26:02.788 | 70.00th=[ 257], 80.00th=[ 326], 90.00th=[ 6477], 95.00th=[ 6544], 00:26:02.788 | 99.00th=[ 6611], 99.50th=[ 6678], 99.90th=[10671], 99.95th=[10671], 00:26:02.788 | 99.99th=[10671] 00:26:02.788 bw ( KiB/s): min= 2043, max=581632, per=10.78%, avg=337577.83, stdev=263464.89, samples=6 00:26:02.788 iops : min= 1, max= 568, avg=329.50, stdev=257.54, samples=6 00:26:02.788 lat (msec) : 250=69.98%, 500=15.23%, >=2000=14.78% 00:26:02.788 cpu : usr=0.03%, sys=1.32%, ctx=1014, majf=0, minf=32769 00:26:02.788 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:26:02.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.788 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.788 issued rwts: total=1116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.788 job0: (groupid=0, jobs=1): err= 0: pid=2369792: Sun Nov 3 15:44:38 2024 00:26:02.788 read: IOPS=1, BW=1205KiB/s (1234kB/s)(15.0MiB/12751msec) 00:26:02.788 slat (msec): min=10, max=2139, avg=709.88, 
stdev=1005.34 00:26:02.788 clat (msec): min=2101, max=12737, avg=8917.33, stdev=3591.90 00:26:02.788 lat (msec): min=4214, max=12750, avg=9627.21, stdev=3176.98 00:26:02.788 clat percentiles (msec): 00:26:02.788 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4212], 20.00th=[ 4245], 00:26:02.788 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10671], 00:26:02.788 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:26:02.788 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.788 | 99.99th=[12684] 00:26:02.788 lat (msec) : >=2000=100.00% 00:26:02.788 cpu : usr=0.02%, sys=0.06%, ctx=53, majf=0, minf=3841 00:26:02.788 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.788 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.788 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.788 job0: (groupid=0, jobs=1): err= 0: pid=2369793: Sun Nov 3 15:44:38 2024 00:26:02.788 read: IOPS=2, BW=2638KiB/s (2701kB/s)(33.0MiB/12810msec) 00:26:02.788 slat (msec): min=2, max=2120, avg=324.34, stdev=746.96 00:26:02.788 clat (msec): min=2105, max=12795, avg=10116.76, stdev=3094.31 00:26:02.788 lat (msec): min=4205, max=12809, avg=10441.10, stdev=2772.59 00:26:02.788 clat percentiles (msec): 00:26:02.788 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8490], 00:26:02.788 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12684], 00:26:02.788 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:26:02.788 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.788 | 99.99th=[12818] 00:26:02.788 lat (msec) : >=2000=100.00% 00:26:02.788 cpu : usr=0.01%, sys=0.26%, ctx=69, majf=0, minf=8449 00:26:02.788 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:26:02.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.788 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.788 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.788 job0: (groupid=0, jobs=1): err= 0: pid=2369794: Sun Nov 3 15:44:38 2024 00:26:02.788 read: IOPS=47, BW=47.7MiB/s (50.0MB/s)(513MiB/10748msec) 00:26:02.788 slat (usec): min=712, max=2100.8k, avg=20827.45, stdev=158177.24 00:26:02.788 clat (msec): min=57, max=10599, avg=2519.04, stdev=2473.29 00:26:02.788 lat (msec): min=976, max=10607, avg=2539.87, stdev=2476.71 00:26:02.788 clat percentiles (msec): 00:26:02.788 | 1.00th=[ 969], 5.00th=[ 986], 10.00th=[ 995], 20.00th=[ 1003], 00:26:02.788 | 30.00th=[ 1045], 40.00th=[ 1053], 50.00th=[ 1083], 60.00th=[ 1167], 00:26:02.788 | 70.00th=[ 1301], 80.00th=[ 6611], 90.00th=[ 7013], 95.00th=[ 7215], 00:26:02.788 | 99.00th=[ 7349], 99.50th=[ 7349], 99.90th=[10537], 99.95th=[10537], 00:26:02.788 | 99.99th=[10537] 00:26:02.788 bw ( KiB/s): min=12288, max=132854, per=2.80%, avg=87550.22, stdev=50010.52, samples=9 00:26:02.788 iops : min= 12, max= 129, avg=85.33, stdev=48.67, samples=9 00:26:02.788 lat (msec) : 100=0.19%, 1000=17.35%, 2000=54.97%, >=2000=27.49% 00:26:02.788 cpu : usr=0.08%, sys=2.02%, ctx=896, majf=0, minf=32769 00:26:02.788 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:26:02.788 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.788 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:02.788 issued rwts: total=513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.788 job0: (groupid=0, jobs=1): err= 0: pid=2369795: Sun Nov 3 15:44:38 2024 00:26:02.788 read: IOPS=4, BW=4214KiB/s (4315kB/s)(53.0MiB/12878msec) 00:26:02.788 slat (usec): min=668, max=2115.1k, avg=203685.96, stdev=610143.18 00:26:02.788 clat (msec): min=2081, max=12873, avg=11599.72, stdev=2579.82 00:26:02.788 lat (msec): min=4196, max=12877, avg=11803.41, stdev=2214.13 00:26:02.788 clat percentiles (msec): 00:26:02.788 | 1.00th=[ 2089], 5.00th=[ 4245], 10.00th=[ 8490], 20.00th=[10671], 00:26:02.788 | 30.00th=[12684], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:26:02.788 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.788 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.788 | 99.99th=[12818] 00:26:02.788 lat (msec) : >=2000=100.00% 00:26:02.788 cpu : usr=0.00%, sys=0.50%, ctx=94, majf=0, minf=13569 00:26:02.788 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0% 00:26:02.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.788 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.788 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.789 job0: (groupid=0, jobs=1): err= 0: pid=2369796: Sun Nov 3 15:44:38 2024 00:26:02.789 read: IOPS=4, BW=4124KiB/s (4223kB/s)(52.0MiB/12912msec) 00:26:02.789 slat (usec): min=553, max=2107.9k, avg=207943.71, stdev=614261.40 00:26:02.789 clat (msec): min=2098, max=12909, avg=11424.81, stdev=2788.50 00:26:02.789 lat (msec): min=4205, max=12911, avg=11632.75, stdev=2463.61 00:26:02.789 clat percentiles (msec): 00:26:02.789 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6409], 20.00th=[10671], 00:26:02.789 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12818], 60.00th=[12818], 00:26:02.789 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:26:02.789 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:26:02.789 | 99.99th=[12953] 00:26:02.789 lat (msec) : >=2000=100.00% 00:26:02.789 cpu : usr=0.00%, sys=0.43%, ctx=97, majf=0, minf=13313 00:26:02.789 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:26:02.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.789 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.789 job0: (groupid=0, jobs=1): err= 0: pid=2369797: Sun Nov 3 15:44:38 2024 00:26:02.789 read: IOPS=4, BW=4568KiB/s (4677kB/s)(57.0MiB/12778msec) 00:26:02.789 slat (usec): min=890, max=2097.4k, avg=187167.22, stdev=583448.10 00:26:02.789 clat (msec): min=2108, max=12757, avg=9214.88, stdev=3314.12 00:26:02.789 lat (msec): min=4206, max=12777, avg=9402.05, stdev=3205.10 00:26:02.789 clat percentiles (msec): 00:26:02.789 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:26:02.789 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:26:02.789 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12818], 
00:26:02.789 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.789 | 99.99th=[12818] 00:26:02.789 lat (msec) : >=2000=100.00% 00:26:02.789 cpu : usr=0.01%, sys=0.45%, ctx=46, majf=0, minf=14593 00:26:02.789 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:26:02.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.789 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.789 job0: (groupid=0, jobs=1): err= 0: pid=2369798: Sun Nov 3 15:44:38 2024 00:26:02.789 read: IOPS=7, BW=7582KiB/s (7764kB/s)(80.0MiB/10805msec) 00:26:02.789 slat (usec): min=570, max=2099.0k, avg=134377.16, stdev=497618.53 00:26:02.789 clat (msec): min=54, max=10802, avg=7499.52, stdev=3382.76 00:26:02.789 lat (msec): min=2110, max=10804, avg=7633.90, stdev=3295.66 00:26:02.789 clat percentiles (msec): 00:26:02.789 | 1.00th=[ 55], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4279], 00:26:02.789 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10537], 00:26:02.789 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:26:02.789 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.789 | 99.99th=[10805] 00:26:02.789 lat (msec) : 100=1.25%, >=2000=98.75% 00:26:02.789 cpu : usr=0.02%, sys=0.77%, ctx=88, majf=0, minf=20481 00:26:02.789 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:26:02.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.789 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.789 job0: (groupid=0, jobs=1): err= 0: pid=2369799: Sun Nov 3 15:44:38 2024 00:26:02.789 read: IOPS=4, BW=4566KiB/s (4675kB/s)(57.0MiB/12784msec) 00:26:02.789 slat (usec): min=925, max=2087.8k, avg=187552.40, stdev=585818.55 00:26:02.789 clat (msec): min=2092, max=12782, avg=8975.08, stdev=3372.59 00:26:02.789 lat (msec): min=4175, max=12783, avg=9162.63, stdev=3278.98 00:26:02.789 clat percentiles (msec): 00:26:02.789 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:26:02.789 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:26:02.789 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.789 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.789 | 99.99th=[12818] 00:26:02.789 lat (msec) : >=2000=100.00% 00:26:02.789 cpu : usr=0.00%, sys=0.48%, ctx=61, majf=0, minf=14593 00:26:02.789 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:26:02.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.789 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.789 job1: (groupid=0, jobs=1): err= 0: pid=2369800: Sun Nov 3 15:44:38 2024 00:26:02.789 read: IOPS=3, BW=4006KiB/s (4102kB/s)(50.0MiB/12781msec) 00:26:02.789 slat (usec): min=896, max=2087.3k, avg=213643.10, stdev=618006.70 00:26:02.789 clat (msec): min=2097, max=12777, avg=10191.35, stdev=3174.41 00:26:02.789 lat (msec): 
min=4177, max=12780, avg=10404.99, stdev=2971.58 00:26:02.789 clat percentiles (msec): 00:26:02.789 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:26:02.789 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12684], 60.00th=[12684], 00:26:02.789 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.789 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.789 | 99.99th=[12818] 00:26:02.789 lat (msec) : >=2000=100.00% 00:26:02.789 cpu : usr=0.00%, sys=0.41%, ctx=72, majf=0, minf=12801 00:26:02.789 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:26:02.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.789 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.789 job1: (groupid=0, jobs=1): err= 0: pid=2369801: Sun Nov 3 15:44:38 2024 00:26:02.789 read: IOPS=2, BW=2872KiB/s (2941kB/s)(36.0MiB/12837msec) 00:26:02.789 slat (usec): min=598, max=2118.0k, avg=298391.00, stdev=729684.18 00:26:02.789 clat (msec): min=2094, max=12834, avg=10927.99, stdev=3238.09 00:26:02.789 lat (msec): min=4173, max=12836, avg=11226.38, stdev=2875.45 00:26:02.789 clat percentiles (msec): 00:26:02.789 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 8490], 00:26:02.789 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12684], 60.00th=[12684], 00:26:02.789 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.789 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.789 | 99.99th=[12818] 00:26:02.789 lat (msec) : >=2000=100.00% 00:26:02.789 cpu : usr=0.00%, sys=0.26%, ctx=72, majf=0, minf=9217 00:26:02.789 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:26:02.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.789 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.789 job1: (groupid=0, jobs=1): err= 0: pid=2369802: Sun Nov 3 15:44:38 2024 00:26:02.789 read: IOPS=3, BW=4009KiB/s (4105kB/s)(50.0MiB/12772msec) 00:26:02.789 slat (usec): min=630, max=2082.5k, avg=213365.85, stdev=619917.36 00:26:02.789 clat (msec): min=2102, max=12765, avg=8925.67, stdev=3308.48 00:26:02.789 lat (msec): min=4185, max=12771, avg=9139.04, stdev=3201.78 00:26:02.789 clat percentiles (msec): 00:26:02.789 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:26:02.789 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:26:02.789 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:26:02.789 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.789 | 99.99th=[12818] 00:26:02.789 lat (msec) : >=2000=100.00% 00:26:02.789 cpu : usr=0.01%, sys=0.38%, ctx=50, majf=0, minf=12801 00:26:02.789 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:26:02.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.789 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.789 latency : target=0, window=0, percentile=100.00%, depth=128 
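The per-job blocks are internally consistent and worth a quick arithmetic check against job0's first entry (pid 2369787): 83.0 MiB read over a 12909 msec run at bs=1 MiB works out to

  83.0 MiB / 12.909 s = 84992 KiB / 12.909 s ~ 6584 KiB/s   (matches the reported BW=6584KiB/s)
  6584 KiB/s / 1024 KiB per I/O ~ 6.4                       (reported as IOPS=6)

The multi-second completion latencies (clat averaging around 11.1 s in that entry) are consistent with the deliberate oversubscription noted above: with far more I/Os outstanding than SRQ entries, most requests wait at the target until a shared receive buffer frees up.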
00:26:02.789 job1: (groupid=0, jobs=1): err= 0: pid=2369803: Sun Nov 3 15:44:38 2024 00:26:02.789 read: IOPS=2, BW=2718KiB/s (2783kB/s)(34.0MiB/12809msec) 00:26:02.789 slat (usec): min=960, max=2121.0k, avg=314688.53, stdev=745525.72 00:26:02.789 clat (msec): min=2108, max=12806, avg=11139.36, stdev=3018.02 00:26:02.789 lat (msec): min=4196, max=12808, avg=11454.05, stdev=2572.82 00:26:02.789 clat percentiles (msec): 00:26:02.789 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8557], 00:26:02.789 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818], 00:26:02.789 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.789 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.789 | 99.99th=[12818] 00:26:02.789 lat (msec) : >=2000=100.00% 00:26:02.789 cpu : usr=0.00%, sys=0.25%, ctx=71, majf=0, minf=8705 00:26:02.789 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:26:02.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.789 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.789 job1: (groupid=0, jobs=1): err= 0: pid=2369804: Sun Nov 3 15:44:38 2024 00:26:02.789 read: IOPS=242, BW=243MiB/s (255MB/s)(3110MiB/12804msec) 00:26:02.789 slat (usec): min=43, max=2099.4k, avg=3437.71, stdev=64981.45 00:26:02.789 clat (msec): min=118, max=8705, avg=512.54, stdev=1655.34 00:26:02.789 lat (msec): min=120, max=8706, avg=515.98, stdev=1661.35 00:26:02.789 clat percentiles (msec): 00:26:02.789 | 1.00th=[ 121], 5.00th=[ 122], 10.00th=[ 122], 20.00th=[ 123], 00:26:02.789 | 30.00th=[ 124], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 125], 00:26:02.789 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 342], 00:26:02.790 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:26:02.790 | 99.99th=[ 8658] 00:26:02.790 bw ( KiB/s): min= 2043, max=1060864, per=16.25%, avg=508834.83, stdev=415435.09, samples=12 00:26:02.790 iops : min= 1, max= 1036, avg=496.67, stdev=405.69, samples=12 00:26:02.790 lat (msec) : 250=85.82%, 500=9.84%, >=2000=4.34% 00:26:02.790 cpu : usr=0.06%, sys=2.27%, ctx=2897, majf=0, minf=32769 00:26:02.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.790 issued rwts: total=3110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.790 job1: (groupid=0, jobs=1): err= 0: pid=2369805: Sun Nov 3 15:44:38 2024 00:26:02.790 read: IOPS=7, BW=7587KiB/s (7769kB/s)(79.0MiB/10663msec) 00:26:02.790 slat (usec): min=530, max=2095.2k, avg=134177.30, stdev=497974.56 00:26:02.790 clat (msec): min=61, max=10661, avg=6940.30, stdev=3355.86 00:26:02.790 lat (msec): min=2109, max=10662, avg=7074.48, stdev=3288.55 00:26:02.790 clat percentiles (msec): 00:26:02.790 | 1.00th=[ 62], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2198], 00:26:02.790 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8658], 00:26:02.790 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:26:02.790 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:02.790 | 99.99th=[10671] 
00:26:02.790 lat (msec) : 100=1.27%, >=2000=98.73% 00:26:02.790 cpu : usr=0.00%, sys=0.68%, ctx=50, majf=0, minf=20225 00:26:02.790 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.1%, 16=20.3%, 32=40.5%, >=64=20.3% 00:26:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.790 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.790 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.790 job1: (groupid=0, jobs=1): err= 0: pid=2369806: Sun Nov 3 15:44:38 2024 00:26:02.790 read: IOPS=2, BW=2162KiB/s (2214kB/s)(27.0MiB/12786msec) 00:26:02.790 slat (usec): min=1102, max=2110.7k, avg=395712.87, stdev=808289.66 00:26:02.790 clat (msec): min=2101, max=12740, avg=10061.11, stdev=3078.93 00:26:02.790 lat (msec): min=4204, max=12785, avg=10456.83, stdev=2676.94 00:26:02.790 clat percentiles (msec): 00:26:02.790 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8490], 00:26:02.790 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12550], 00:26:02.790 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:26:02.790 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.790 | 99.99th=[12684] 00:26:02.790 lat (msec) : >=2000=100.00% 00:26:02.790 cpu : usr=0.00%, sys=0.18%, ctx=68, majf=0, minf=6913 00:26:02.790 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:26:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.790 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.790 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.790 job1: (groupid=0, jobs=1): err= 0: pid=2369807: Sun Nov 3 15:44:38 2024 00:26:02.790 read: IOPS=34, BW=35.0MiB/s (36.7MB/s)(448MiB/12810msec) 00:26:02.790 slat (usec): min=74, max=2095.9k, avg=23889.33, stdev=195234.15 00:26:02.790 clat (msec): min=516, max=11071, avg=3467.02, stdev=4258.03 00:26:02.790 lat (msec): min=518, max=11077, avg=3490.91, stdev=4270.41 00:26:02.790 clat percentiles (msec): 00:26:02.790 | 1.00th=[ 518], 5.00th=[ 523], 10.00th=[ 523], 20.00th=[ 527], 00:26:02.790 | 30.00th=[ 531], 40.00th=[ 535], 50.00th=[ 550], 60.00th=[ 726], 00:26:02.790 | 70.00th=[ 4799], 80.00th=[10671], 90.00th=[10939], 95.00th=[10939], 00:26:02.790 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:26:02.790 | 99.99th=[11073] 00:26:02.790 bw ( KiB/s): min= 2043, max=247808, per=2.62%, avg=82103.75, stdev=103174.23, samples=8 00:26:02.790 iops : min= 1, max= 242, avg=79.88, stdev=100.80, samples=8 00:26:02.790 lat (msec) : 750=60.94%, 1000=3.12%, >=2000=35.94% 00:26:02.790 cpu : usr=0.03%, sys=1.30%, ctx=443, majf=0, minf=32769 00:26:02.790 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.1%, >=64=85.9% 00:26:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.790 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:02.790 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.790 job1: (groupid=0, jobs=1): err= 0: pid=2369808: Sun Nov 3 15:44:38 2024 00:26:02.790 read: IOPS=6, BW=6263KiB/s (6413kB/s)(78.0MiB/12753msec) 00:26:02.790 slat (usec): min=953, max=2118.7k, avg=136505.95, stdev=485597.40 00:26:02.790 clat 
(msec): min=2105, max=12731, avg=11091.93, stdev=2700.01 00:26:02.790 lat (msec): min=4177, max=12752, avg=11228.44, stdev=2501.63 00:26:02.790 clat percentiles (msec): 00:26:02.790 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[10671], 00:26:02.790 | 30.00th=[12147], 40.00th=[12281], 50.00th=[12281], 60.00th=[12416], 00:26:02.790 | 70.00th=[12416], 80.00th=[12550], 90.00th=[12550], 95.00th=[12684], 00:26:02.790 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.790 | 99.99th=[12684] 00:26:02.790 lat (msec) : >=2000=100.00% 00:26:02.790 cpu : usr=0.02%, sys=0.54%, ctx=158, majf=0, minf=19969 00:26:02.790 IO depths : 1=1.3%, 2=2.6%, 4=5.1%, 8=10.3%, 16=20.5%, 32=41.0%, >=64=19.2% 00:26:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.790 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.790 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.790 job1: (groupid=0, jobs=1): err= 0: pid=2369809: Sun Nov 3 15:44:38 2024 00:26:02.790 read: IOPS=1, BW=1687KiB/s (1727kB/s)(21.0MiB/12748msec) 00:26:02.790 slat (usec): min=458, max=2107.8k, avg=506668.56, stdev=893551.68 00:26:02.790 clat (msec): min=2107, max=12682, avg=7981.49, stdev=3405.67 00:26:02.790 lat (msec): min=4215, max=12747, avg=8488.16, stdev=3277.19 00:26:02.790 clat percentiles (msec): 00:26:02.790 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4245], 00:26:02.790 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8557], 00:26:02.790 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12684], 95.00th=[12684], 00:26:02.790 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.790 | 99.99th=[12684] 00:26:02.790 lat (msec) : >=2000=100.00% 00:26:02.790 cpu : usr=0.00%, sys=0.11%, ctx=55, majf=0, minf=5377 00:26:02.790 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:26:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.790 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.790 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.790 job1: (groupid=0, jobs=1): err= 0: pid=2369810: Sun Nov 3 15:44:38 2024 00:26:02.790 read: IOPS=3, BW=3334KiB/s (3414kB/s)(42.0MiB/12900msec) 00:26:02.790 slat (usec): min=1059, max=2111.4k, avg=257089.74, stdev=681153.65 00:26:02.790 clat (msec): min=2101, max=12895, avg=11250.18, stdev=3123.65 00:26:02.790 lat (msec): min=4204, max=12899, avg=11507.27, stdev=2777.47 00:26:02.790 clat percentiles (msec): 00:26:02.790 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8557], 00:26:02.790 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:26:02.790 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953], 00:26:02.790 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:26:02.790 | 99.99th=[12953] 00:26:02.790 lat (msec) : >=2000=100.00% 00:26:02.790 cpu : usr=0.01%, sys=0.40%, ctx=87, majf=0, minf=10753 00:26:02.790 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:26:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.790 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.790 issued rwts: total=42,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:02.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.790 job1: (groupid=0, jobs=1): err= 0: pid=2369811: Sun Nov 3 15:44:38 2024 00:26:02.790 read: IOPS=2, BW=2814KiB/s (2881kB/s)(35.0MiB/12737msec) 00:26:02.790 slat (usec): min=954, max=2093.4k, avg=303530.74, stdev=724458.94 00:26:02.790 clat (msec): min=2112, max=12700, avg=7569.47, stdev=3368.44 00:26:02.790 lat (msec): min=4195, max=12736, avg=7873.00, stdev=3340.78 00:26:02.790 clat percentiles (msec): 00:26:02.790 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4245], 00:26:02.790 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 6409], 00:26:02.790 | 70.00th=[ 8557], 80.00th=[12550], 90.00th=[12684], 95.00th=[12684], 00:26:02.790 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.790 | 99.99th=[12684] 00:26:02.790 lat (msec) : >=2000=100.00% 00:26:02.790 cpu : usr=0.00%, sys=0.27%, ctx=51, majf=0, minf=8961 00:26:02.790 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:26:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.790 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.790 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.790 job1: (groupid=0, jobs=1): err= 0: pid=2369812: Sun Nov 3 15:44:38 2024 00:26:02.790 read: IOPS=1, BW=1437KiB/s (1472kB/s)(18.0MiB/12824msec) 00:26:02.790 slat (usec): min=1315, max=2137.0k, avg=595680.88, stdev=946758.41 00:26:02.790 clat (msec): min=2101, max=12822, avg=9572.12, stdev=3305.36 00:26:02.790 lat (msec): min=4194, max=12823, avg=10167.80, stdev=2808.69 00:26:02.790 clat percentiles (msec): 00:26:02.790 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4178], 20.00th=[ 6342], 00:26:02.790 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:26:02.790 | 70.00th=[12550], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.790 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.790 | 99.99th=[12818] 00:26:02.790 lat (msec) : >=2000=100.00% 00:26:02.790 cpu : usr=0.00%, sys=0.14%, ctx=58, majf=0, minf=4609 00:26:02.790 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:26:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.790 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.790 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.791 job2: (groupid=0, jobs=1): err= 0: pid=2369813: Sun Nov 3 15:44:38 2024 00:26:02.791 read: IOPS=2, BW=2966KiB/s (3037kB/s)(37.0MiB/12774msec) 00:26:02.791 slat (usec): min=1054, max=2078.5k, avg=288112.52, stdev=705183.16 00:26:02.791 clat (msec): min=2113, max=12771, avg=9417.42, stdev=3417.95 00:26:02.791 lat (msec): min=4191, max=12773, avg=9705.54, stdev=3229.25 00:26:02.791 clat percentiles (msec): 00:26:02.791 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:26:02.791 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12684], 00:26:02.791 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.791 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.791 | 99.99th=[12818] 00:26:02.791 lat (msec) : >=2000=100.00% 00:26:02.791 cpu : usr=0.00%, sys=0.36%, ctx=65, 
majf=0, minf=9473 00:26:02.791 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:26:02.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.791 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.791 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.791 job2: (groupid=0, jobs=1): err= 0: pid=2369814: Sun Nov 3 15:44:38 2024 00:26:02.791 read: IOPS=5, BW=5771KiB/s (5910kB/s)(72.0MiB/12775msec) 00:26:02.791 slat (usec): min=603, max=2082.1k, avg=148137.70, stdev=521791.44 00:26:02.791 clat (msec): min=2108, max=12771, avg=9097.29, stdev=3314.14 00:26:02.791 lat (msec): min=4166, max=12774, avg=9245.42, stdev=3234.77 00:26:02.791 clat percentiles (msec): 00:26:02.791 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:26:02.791 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:26:02.791 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:26:02.791 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.791 | 99.99th=[12818] 00:26:02.791 lat (msec) : >=2000=100.00% 00:26:02.791 cpu : usr=0.00%, sys=0.62%, ctx=59, majf=0, minf=18433 00:26:02.791 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:26:02.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.791 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.791 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.791 job2: (groupid=0, jobs=1): err= 0: pid=2369815: Sun Nov 3 15:44:38 2024 00:26:02.791 read: IOPS=11, BW=11.5MiB/s (12.1MB/s)(124MiB/10779msec) 00:26:02.791 slat (usec): min=1057, max=2189.0k, avg=86395.22, stdev=372547.48 00:26:02.791 clat (msec): min=65, max=10775, avg=9373.20, stdev=1635.74 00:26:02.791 lat (msec): min=2145, max=10778, avg=9459.59, stdev=1407.04 00:26:02.791 clat percentiles (msec): 00:26:02.791 | 1.00th=[ 2140], 5.00th=[ 6477], 10.00th=[ 8658], 20.00th=[ 8926], 00:26:02.791 | 30.00th=[ 9060], 40.00th=[ 9329], 50.00th=[ 9597], 60.00th=[ 9866], 00:26:02.791 | 70.00th=[10134], 80.00th=[10402], 90.00th=[10805], 95.00th=[10805], 00:26:02.791 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.791 | 99.99th=[10805] 00:26:02.791 lat (msec) : 100=0.81%, >=2000=99.19% 00:26:02.791 cpu : usr=0.01%, sys=1.00%, ctx=493, majf=0, minf=31745 00:26:02.791 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.5%, 16=12.9%, 32=25.8%, >=64=49.2% 00:26:02.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.791 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.791 issued rwts: total=124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.791 job2: (groupid=0, jobs=1): err= 0: pid=2369816: Sun Nov 3 15:44:38 2024 00:26:02.791 read: IOPS=11, BW=11.8MiB/s (12.4MB/s)(151MiB/12774msec) 00:26:02.791 slat (usec): min=576, max=2133.2k, avg=70660.21, stdev=335294.19 00:26:02.791 clat (msec): min=1600, max=12742, avg=9818.22, stdev=3313.87 00:26:02.791 lat (msec): min=1636, max=12747, avg=9888.88, stdev=3250.12 00:26:02.791 clat percentiles (msec): 00:26:02.791 | 1.00th=[ 1603], 5.00th=[ 1770], 10.00th=[ 2106], 20.00th=[ 8490], 00:26:02.791 | 30.00th=[10805], 
40.00th=[11073], 50.00th=[11208], 60.00th=[11342], 00:26:02.791 | 70.00th=[11476], 80.00th=[11745], 90.00th=[11879], 95.00th=[12013], 00:26:02.791 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.791 | 99.99th=[12684] 00:26:02.791 bw ( KiB/s): min= 2043, max=18432, per=0.26%, avg=8184.83, stdev=6073.87, samples=6 00:26:02.791 iops : min= 1, max= 18, avg= 7.50, stdev= 6.09, samples=6 00:26:02.791 lat (msec) : 2000=9.93%, >=2000=90.07% 00:26:02.791 cpu : usr=0.00%, sys=0.74%, ctx=477, majf=0, minf=32769 00:26:02.791 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.3%, 16=10.6%, 32=21.2%, >=64=58.3% 00:26:02.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.791 complete : 0=0.0%, 4=96.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.0% 00:26:02.791 issued rwts: total=151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.791 job2: (groupid=0, jobs=1): err= 0: pid=2369817: Sun Nov 3 15:44:38 2024 00:26:02.791 read: IOPS=3, BW=3735KiB/s (3824kB/s)(47.0MiB/12887msec) 00:26:02.791 slat (usec): min=1089, max=2117.5k, avg=229430.96, stdev=639607.41 00:26:02.791 clat (msec): min=2102, max=12884, avg=10658.59, stdev=3231.12 00:26:02.791 lat (msec): min=4180, max=12886, avg=10888.02, stdev=2983.77 00:26:02.791 clat percentiles (msec): 00:26:02.791 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:26:02.791 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818], 00:26:02.791 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.791 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.791 | 99.99th=[12818] 00:26:02.791 lat (msec) : >=2000=100.00% 00:26:02.791 cpu : usr=0.00%, sys=0.42%, ctx=101, majf=0, minf=12033 00:26:02.791 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:26:02.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.791 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.791 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.791 job2: (groupid=0, jobs=1): err= 0: pid=2369818: Sun Nov 3 15:44:38 2024 00:26:02.791 read: IOPS=2, BW=2093KiB/s (2143kB/s)(26.0MiB/12721msec) 00:26:02.791 slat (msec): min=7, max=2073, avg=408.05, stdev=814.61 00:26:02.791 clat (msec): min=2111, max=12657, avg=7592.59, stdev=3128.25 00:26:02.791 lat (msec): min=4183, max=12720, avg=8000.64, stdev=3076.15 00:26:02.791 clat percentiles (msec): 00:26:02.791 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245], 00:26:02.791 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490], 00:26:02.791 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12684], 95.00th=[12684], 00:26:02.791 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.791 | 99.99th=[12684] 00:26:02.791 lat (msec) : >=2000=100.00% 00:26:02.791 cpu : usr=0.02%, sys=0.22%, ctx=54, majf=0, minf=6657 00:26:02.791 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:26:02.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.791 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.791 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.791 job2: (groupid=0, 
jobs=1): err= 0: pid=2369819: Sun Nov 3 15:44:38 2024 00:26:02.791 read: IOPS=13, BW=13.6MiB/s (14.2MB/s)(147MiB/10846msec) 00:26:02.791 slat (usec): min=977, max=2152.9k, avg=73166.63, stdev=338932.88 00:26:02.791 clat (msec): min=89, max=10794, avg=8641.81, stdev=2592.21 00:26:02.791 lat (msec): min=2005, max=10795, avg=8714.98, stdev=2498.54 00:26:02.791 clat percentiles (msec): 00:26:02.791 | 1.00th=[ 2005], 5.00th=[ 2140], 10.00th=[ 2232], 20.00th=[ 8658], 00:26:02.791 | 30.00th=[ 8926], 40.00th=[ 9194], 50.00th=[ 9463], 60.00th=[ 9731], 00:26:02.791 | 70.00th=[10134], 80.00th=[10402], 90.00th=[10671], 95.00th=[10671], 00:26:02.791 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.791 | 99.99th=[10805] 00:26:02.791 bw ( KiB/s): min= 2043, max=18432, per=0.25%, avg=7775.40, stdev=6699.57, samples=5 00:26:02.791 iops : min= 1, max= 18, avg= 7.00, stdev= 6.82, samples=5 00:26:02.791 lat (msec) : 100=0.68%, >=2000=99.32% 00:26:02.791 cpu : usr=0.01%, sys=1.10%, ctx=503, majf=0, minf=32769 00:26:02.791 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.4%, 16=10.9%, 32=21.8%, >=64=57.1% 00:26:02.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.791 complete : 0=0.0%, 4=95.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.8% 00:26:02.791 issued rwts: total=147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.791 job2: (groupid=0, jobs=1): err= 0: pid=2369820: Sun Nov 3 15:44:38 2024 00:26:02.791 read: IOPS=3, BW=3972KiB/s (4067kB/s)(50.0MiB/12891msec) 00:26:02.791 slat (usec): min=1128, max=2109.6k, avg=215690.37, stdev=623484.37 00:26:02.791 clat (msec): min=2105, max=12889, avg=10872.26, stdev=2954.31 00:26:02.791 lat (msec): min=4215, max=12890, avg=11087.95, stdev=2682.37 00:26:02.791 clat percentiles (msec): 00:26:02.791 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8490], 00:26:02.791 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818], 00:26:02.791 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.791 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:26:02.791 | 99.99th=[12953] 00:26:02.791 lat (msec) : >=2000=100.00% 00:26:02.791 cpu : usr=0.00%, sys=0.43%, ctx=107, majf=0, minf=12801 00:26:02.791 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:26:02.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.791 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.791 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.791 job2: (groupid=0, jobs=1): err= 0: pid=2369821: Sun Nov 3 15:44:38 2024 00:26:02.791 read: IOPS=274, BW=274MiB/s (288MB/s)(2753MiB/10034msec) 00:26:02.791 slat (usec): min=59, max=2066.5k, avg=3628.74, stdev=39565.47 00:26:02.791 clat (msec): min=32, max=2488, avg=445.64, stdev=470.55 00:26:02.791 lat (msec): min=34, max=2489, avg=449.27, stdev=472.32 00:26:02.791 clat percentiles (msec): 00:26:02.791 | 1.00th=[ 88], 5.00th=[ 230], 10.00th=[ 232], 20.00th=[ 234], 00:26:02.791 | 30.00th=[ 236], 40.00th=[ 241], 50.00th=[ 259], 60.00th=[ 305], 00:26:02.791 | 70.00th=[ 506], 80.00th=[ 523], 90.00th=[ 592], 95.00th=[ 709], 00:26:02.792 | 99.00th=[ 2467], 99.50th=[ 2467], 99.90th=[ 2500], 99.95th=[ 2500], 00:26:02.792 | 99.99th=[ 2500] 00:26:02.792 bw ( KiB/s): min=118546, max=563200, per=11.17%, 
avg=349825.07, stdev=149565.00, samples=14 00:26:02.792 iops : min= 115, max= 550, avg=341.50, stdev=146.20, samples=14 00:26:02.792 lat (msec) : 50=0.29%, 100=0.94%, 250=43.41%, 500=18.63%, 750=32.04% 00:26:02.792 lat (msec) : 1000=0.07%, >=2000=4.61% 00:26:02.792 cpu : usr=0.13%, sys=4.03%, ctx=2461, majf=0, minf=32769 00:26:02.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:02.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.792 issued rwts: total=2753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.792 job2: (groupid=0, jobs=1): err= 0: pid=2369822: Sun Nov 3 15:44:38 2024 00:26:02.792 read: IOPS=10, BW=10.3MiB/s (10.8MB/s)(110MiB/10693msec) 00:26:02.792 slat (usec): min=999, max=2174.8k, avg=96612.20, stdev=389731.46 00:26:02.792 clat (msec): min=64, max=10687, avg=8948.90, stdev=2102.93 00:26:02.792 lat (msec): min=2112, max=10692, avg=9045.51, stdev=1927.88 00:26:02.792 clat percentiles (msec): 00:26:02.792 | 1.00th=[ 2106], 5.00th=[ 2198], 10.00th=[ 6409], 20.00th=[ 8792], 00:26:02.792 | 30.00th=[ 9060], 40.00th=[ 9194], 50.00th=[ 9463], 60.00th=[ 9731], 00:26:02.792 | 70.00th=[10000], 80.00th=[10268], 90.00th=[10402], 95.00th=[10537], 00:26:02.792 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:02.792 | 99.99th=[10671] 00:26:02.792 lat (msec) : 100=0.91%, >=2000=99.09% 00:26:02.792 cpu : usr=0.00%, sys=0.78%, ctx=470, majf=0, minf=28161 00:26:02.792 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.3%, 16=14.5%, 32=29.1%, >=64=42.7% 00:26:02.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.792 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.792 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.792 job2: (groupid=0, jobs=1): err= 0: pid=2369823: Sun Nov 3 15:44:38 2024 00:26:02.792 read: IOPS=5, BW=5980KiB/s (6124kB/s)(75.0MiB/12842msec) 00:26:02.792 slat (usec): min=881, max=2074.6k, avg=143031.58, stdev=511172.57 00:26:02.792 clat (msec): min=2114, max=12838, avg=9587.02, stdev=3351.92 00:26:02.792 lat (msec): min=4188, max=12841, avg=9730.05, stdev=3256.26 00:26:02.792 clat percentiles (msec): 00:26:02.792 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:26:02.792 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12684], 00:26:02.792 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:26:02.792 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.792 | 99.99th=[12818] 00:26:02.792 lat (msec) : >=2000=100.00% 00:26:02.792 cpu : usr=0.02%, sys=0.59%, ctx=86, majf=0, minf=19201 00:26:02.792 IO depths : 1=1.3%, 2=2.7%, 4=5.3%, 8=10.7%, 16=21.3%, 32=42.7%, >=64=16.0% 00:26:02.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.792 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.792 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.792 job2: (groupid=0, jobs=1): err= 0: pid=2369824: Sun Nov 3 15:44:38 2024 00:26:02.792 read: IOPS=15, BW=15.7MiB/s (16.4MB/s)(200MiB/12766msec) 00:26:02.792 slat (usec): min=44, max=2118.2k, avg=53310.11, stdev=299883.27 
00:26:02.792 clat (msec): min=632, max=12723, avg=7789.40, stdev=5056.11 00:26:02.792 lat (msec): min=636, max=12763, avg=7842.71, stdev=5046.80 00:26:02.792 clat percentiles (msec): 00:26:02.792 | 1.00th=[ 634], 5.00th=[ 642], 10.00th=[ 642], 20.00th=[ 651], 00:26:02.792 | 30.00th=[ 2106], 40.00th=[ 7953], 50.00th=[11745], 60.00th=[11745], 00:26:02.792 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:26:02.792 | 99.00th=[12147], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.792 | 99.99th=[12684] 00:26:02.792 bw ( KiB/s): min= 2043, max=69632, per=0.68%, avg=21354.00, stdev=26278.74, samples=7 00:26:02.792 iops : min= 1, max= 68, avg=20.57, stdev=25.86, samples=7 00:26:02.792 lat (msec) : 750=28.00%, 1000=0.50%, 2000=1.00%, >=2000=70.50% 00:26:02.792 cpu : usr=0.00%, sys=0.88%, ctx=196, majf=0, minf=32769 00:26:02.792 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=16.0%, >=64=68.5% 00:26:02.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.792 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:26:02.792 issued rwts: total=200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.792 job2: (groupid=0, jobs=1): err= 0: pid=2369825: Sun Nov 3 15:44:38 2024 00:26:02.792 read: IOPS=5, BW=5230KiB/s (5356kB/s)(65.0MiB/12726msec) 00:26:02.792 slat (usec): min=530, max=2068.7k, avg=163287.61, stdev=543100.02 00:26:02.792 clat (msec): min=2111, max=12723, avg=9172.44, stdev=3222.88 00:26:02.792 lat (msec): min=4177, max=12725, avg=9335.73, stdev=3127.00 00:26:02.792 clat percentiles (msec): 00:26:02.792 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6342], 00:26:02.792 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:26:02.792 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:26:02.792 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:26:02.792 | 99.99th=[12684] 00:26:02.792 lat (msec) : >=2000=100.00% 00:26:02.792 cpu : usr=0.00%, sys=0.46%, ctx=54, majf=0, minf=16641 00:26:02.792 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:26:02.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.792 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.792 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.792 job3: (groupid=0, jobs=1): err= 0: pid=2369826: Sun Nov 3 15:44:38 2024 00:26:02.792 read: IOPS=11, BW=11.6MiB/s (12.1MB/s)(125MiB/10805msec) 00:26:02.792 slat (usec): min=798, max=2091.4k, avg=85718.45, stdev=363636.35 00:26:02.792 clat (msec): min=88, max=10798, avg=8455.97, stdev=2686.44 00:26:02.792 lat (msec): min=2168, max=10804, avg=8541.68, stdev=2586.39 00:26:02.792 clat percentiles (msec): 00:26:02.792 | 1.00th=[ 2165], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6477], 00:26:02.792 | 30.00th=[ 8926], 40.00th=[ 9194], 50.00th=[ 9463], 60.00th=[ 9731], 00:26:02.792 | 70.00th=[10000], 80.00th=[10268], 90.00th=[10671], 95.00th=[10671], 00:26:02.792 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.792 | 99.99th=[10805] 00:26:02.792 lat (msec) : 100=0.80%, >=2000=99.20% 00:26:02.792 cpu : usr=0.00%, sys=1.11%, ctx=518, majf=0, minf=32001 00:26:02.792 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.4%, 16=12.8%, 32=25.6%, >=64=49.6% 00:26:02.792 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.792 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.792 issued rwts: total=125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.792 job3: (groupid=0, jobs=1): err= 0: pid=2369827: Sun Nov 3 15:44:38 2024 00:26:02.792 read: IOPS=80, BW=80.1MiB/s (84.0MB/s)(1026MiB/12810msec) 00:26:02.792 slat (usec): min=41, max=2100.8k, avg=10429.55, stdev=111782.90 00:26:02.792 clat (msec): min=383, max=8876, avg=1514.12, stdev=2582.70 00:26:02.792 lat (msec): min=387, max=8880, avg=1524.54, stdev=2591.56 00:26:02.792 clat percentiles (msec): 00:26:02.792 | 1.00th=[ 388], 5.00th=[ 388], 10.00th=[ 393], 20.00th=[ 393], 00:26:02.792 | 30.00th=[ 397], 40.00th=[ 401], 50.00th=[ 405], 60.00th=[ 634], 00:26:02.792 | 70.00th=[ 684], 80.00th=[ 726], 90.00th=[ 8557], 95.00th=[ 8658], 00:26:02.792 | 99.00th=[ 8792], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:26:02.792 | 99.99th=[ 8926] 00:26:02.792 bw ( KiB/s): min= 2043, max=331776, per=5.34%, avg=167279.09, stdev=133825.08, samples=11 00:26:02.792 iops : min= 1, max= 324, avg=163.09, stdev=130.87, samples=11 00:26:02.792 lat (msec) : 500=53.51%, 750=28.85%, 1000=2.92%, >=2000=14.72% 00:26:02.792 cpu : usr=0.05%, sys=1.38%, ctx=1130, majf=0, minf=32769 00:26:02.792 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:26:02.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.792 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.792 issued rwts: total=1026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.792 job3: (groupid=0, jobs=1): err= 0: pid=2369828: Sun Nov 3 15:44:38 2024 00:26:02.792 read: IOPS=82, BW=82.7MiB/s (86.7MB/s)(888MiB/10741msec) 00:26:02.792 slat (usec): min=46, max=2114.8k, avg=11987.39, stdev=121411.13 00:26:02.792 clat (msec): min=92, max=6908, avg=1445.41, stdev=2151.33 00:26:02.793 lat (msec): min=389, max=6908, avg=1457.40, stdev=2157.22 00:26:02.793 clat percentiles (msec): 00:26:02.793 | 1.00th=[ 388], 5.00th=[ 393], 10.00th=[ 393], 20.00th=[ 397], 00:26:02.793 | 30.00th=[ 397], 40.00th=[ 401], 50.00th=[ 401], 60.00th=[ 409], 00:26:02.793 | 70.00th=[ 969], 80.00th=[ 1150], 90.00th=[ 6611], 95.00th=[ 6745], 00:26:02.793 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:26:02.793 | 99.99th=[ 6879] 00:26:02.793 bw ( KiB/s): min= 2043, max=331113, per=4.97%, avg=155556.50, stdev=141398.74, samples=10 00:26:02.793 iops : min= 1, max= 323, avg=151.60, stdev=138.30, samples=10 00:26:02.793 lat (msec) : 100=0.11%, 500=60.92%, 750=5.74%, 1000=3.49%, 2000=14.41% 00:26:02.793 lat (msec) : >=2000=15.32% 00:26:02.793 cpu : usr=0.05%, sys=1.59%, ctx=1148, majf=0, minf=32769 00:26:02.793 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:26:02.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.793 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.793 issued rwts: total=888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.793 job3: (groupid=0, jobs=1): err= 0: pid=2369829: Sun Nov 3 15:44:38 2024 00:26:02.793 read: IOPS=1, BW=1681KiB/s (1721kB/s)(21.0MiB/12795msec) 00:26:02.793 slat (msec): min=4, max=2117, avg=509.02, 
stdev=883.41 00:26:02.793 clat (msec): min=2104, max=12780, avg=9293.04, stdev=3641.93 00:26:02.793 lat (msec): min=4174, max=12794, avg=9802.06, stdev=3319.82 00:26:02.793 clat percentiles (msec): 00:26:02.793 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:26:02.793 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12684], 00:26:02.793 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:26:02.793 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:26:02.793 | 99.99th=[12818] 00:26:02.793 lat (msec) : >=2000=100.00% 00:26:02.793 cpu : usr=0.00%, sys=0.13%, ctx=82, majf=0, minf=5377 00:26:02.793 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:26:02.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.793 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.793 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.793 job3: (groupid=0, jobs=1): err= 0: pid=2369830: Sun Nov 3 15:44:38 2024 00:26:02.793 read: IOPS=37, BW=37.6MiB/s (39.4MB/s)(484MiB/12866msec) 00:26:02.793 slat (usec): min=43, max=2084.0k, avg=22215.28, stdev=183518.60 00:26:02.793 clat (msec): min=476, max=12621, avg=2966.81, stdev=3492.52 00:26:02.793 lat (msec): min=488, max=12641, avg=2989.02, stdev=3510.67 00:26:02.793 clat percentiles (msec): 00:26:02.793 | 1.00th=[ 489], 5.00th=[ 498], 10.00th=[ 498], 20.00th=[ 498], 00:26:02.793 | 30.00th=[ 502], 40.00th=[ 527], 50.00th=[ 625], 60.00th=[ 986], 00:26:02.793 | 70.00th=[ 2970], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8926], 00:26:02.793 | 99.00th=[ 8926], 99.50th=[10671], 99.90th=[12684], 99.95th=[12684], 00:26:02.793 | 99.99th=[12684] 00:26:02.793 bw ( KiB/s): min= 1432, max=251904, per=2.92%, avg=91311.12, stdev=113731.74, samples=8 00:26:02.793 iops : min= 1, max= 246, avg=88.75, stdev=111.43, samples=8 00:26:02.793 lat (msec) : 500=30.99%, 750=21.69%, 1000=9.50%, >=2000=37.81% 00:26:02.793 cpu : usr=0.02%, sys=1.02%, ctx=606, majf=0, minf=32769 00:26:02.793 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=87.0% 00:26:02.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.793 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:02.793 issued rwts: total=484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.793 job3: (groupid=0, jobs=1): err= 0: pid=2369831: Sun Nov 3 15:44:38 2024 00:26:02.793 read: IOPS=4, BW=4941KiB/s (5060kB/s)(52.0MiB/10776msec) 00:26:02.793 slat (usec): min=948, max=2082.8k, avg=204930.95, stdev=598296.46 00:26:02.793 clat (msec): min=118, max=10755, avg=6794.92, stdev=3776.17 00:26:02.793 lat (msec): min=2146, max=10775, avg=6999.86, stdev=3695.02 00:26:02.793 clat percentiles (msec): 00:26:02.793 | 1.00th=[ 118], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2232], 00:26:02.793 | 30.00th=[ 2232], 40.00th=[ 4329], 50.00th=[ 8658], 60.00th=[10537], 00:26:02.793 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:26:02.793 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.793 | 99.99th=[10805] 00:26:02.793 lat (msec) : 250=1.92%, >=2000=98.08% 00:26:02.793 cpu : usr=0.02%, sys=0.44%, ctx=102, majf=0, minf=13313 00:26:02.793 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 
00:26:02.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.793 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.793 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.793 job3: (groupid=0, jobs=1): err= 0: pid=2369832: Sun Nov 3 15:44:38 2024 00:26:02.793 read: IOPS=80, BW=80.1MiB/s (84.0MB/s)(1031MiB/12866msec) 00:26:02.793 slat (usec): min=43, max=2086.5k, avg=10427.53, stdev=128613.29 00:26:02.793 clat (msec): min=239, max=10922, avg=1551.49, stdev=3308.53 00:26:02.793 lat (msec): min=240, max=10922, avg=1561.92, stdev=3320.64 00:26:02.793 clat percentiles (msec): 00:26:02.793 | 1.00th=[ 241], 5.00th=[ 243], 10.00th=[ 243], 20.00th=[ 243], 00:26:02.793 | 30.00th=[ 245], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:26:02.793 | 70.00th=[ 253], 80.00th=[ 257], 90.00th=[ 8557], 95.00th=[10805], 00:26:02.793 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:26:02.793 | 99.99th=[10939] 00:26:02.793 bw ( KiB/s): min= 1432, max=534528, per=5.91%, avg=184967.40, stdev=242562.66, samples=10 00:26:02.793 iops : min= 1, max= 522, avg=180.20, stdev=237.00, samples=10 00:26:02.793 lat (msec) : 250=63.53%, 500=21.73%, >=2000=14.74% 00:26:02.793 cpu : usr=0.07%, sys=1.34%, ctx=949, majf=0, minf=32769 00:26:02.793 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:26:02.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.793 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.793 issued rwts: total=1031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.793 job3: (groupid=0, jobs=1): err= 0: pid=2369833: Sun Nov 3 15:44:38 2024 00:26:02.793 read: IOPS=6, BW=7162KiB/s (7334kB/s)(76.0MiB/10866msec) 00:26:02.793 slat (usec): min=415, max=2147.3k, avg=141842.37, stdev=509865.00 00:26:02.793 clat (msec): min=85, max=10864, avg=9703.35, stdev=2462.50 00:26:02.793 lat (msec): min=2173, max=10865, avg=9845.19, stdev=2197.32 00:26:02.793 clat percentiles (msec): 00:26:02.793 | 1.00th=[ 86], 5.00th=[ 2198], 10.00th=[ 6477], 20.00th=[10537], 00:26:02.793 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10671], 60.00th=[10671], 00:26:02.793 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:26:02.793 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.793 | 99.99th=[10805] 00:26:02.793 lat (msec) : 100=1.32%, >=2000=98.68% 00:26:02.793 cpu : usr=0.00%, sys=0.65%, ctx=119, majf=0, minf=19457 00:26:02.793 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.5%, 16=21.1%, 32=42.1%, >=64=17.1% 00:26:02.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.793 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.793 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.793 job3: (groupid=0, jobs=1): err= 0: pid=2369834: Sun Nov 3 15:44:38 2024 00:26:02.793 read: IOPS=2, BW=2107KiB/s (2157kB/s)(22.0MiB/10694msec) 00:26:02.793 slat (usec): min=1244, max=2125.2k, avg=481779.28, stdev=870796.58 00:26:02.793 clat (msec): min=93, max=10691, avg=6849.89, stdev=3561.06 00:26:02.793 lat (msec): min=2153, max=10692, avg=7331.67, stdev=3311.75 00:26:02.793 clat percentiles (msec): 00:26:02.793 | 1.00th=[ 94], 5.00th=[ 
2165], 10.00th=[ 2198], 20.00th=[ 2232], 00:26:02.793 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658], 00:26:02.793 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:02.793 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:02.793 | 99.99th=[10671] 00:26:02.793 lat (msec) : 100=4.55%, >=2000=95.45% 00:26:02.793 cpu : usr=0.00%, sys=0.23%, ctx=59, majf=0, minf=5633 00:26:02.793 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:26:02.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.793 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.793 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.793 job3: (groupid=0, jobs=1): err= 0: pid=2369835: Sun Nov 3 15:44:38 2024 00:26:02.793 read: IOPS=26, BW=26.4MiB/s (27.6MB/s)(286MiB/10853msec) 00:26:02.793 slat (usec): min=87, max=2125.1k, avg=37628.33, stdev=224844.86 00:26:02.793 clat (msec): min=88, max=8576, avg=4526.87, stdev=3031.55 00:26:02.793 lat (msec): min=1247, max=8580, avg=4564.50, stdev=3023.96 00:26:02.793 clat percentiles (msec): 00:26:02.793 | 1.00th=[ 1250], 5.00th=[ 1418], 10.00th=[ 1552], 20.00th=[ 1703], 00:26:02.793 | 30.00th=[ 1888], 40.00th=[ 1938], 50.00th=[ 2232], 60.00th=[ 6409], 00:26:02.793 | 70.00th=[ 7953], 80.00th=[ 8221], 90.00th=[ 8423], 95.00th=[ 8490], 00:26:02.793 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:26:02.793 | 99.99th=[ 8557] 00:26:02.793 bw ( KiB/s): min= 2048, max=131072, per=1.29%, avg=40437.62, stdev=50572.87, samples=8 00:26:02.793 iops : min= 2, max= 128, avg=39.13, stdev=49.61, samples=8 00:26:02.793 lat (msec) : 100=0.35%, 2000=47.55%, >=2000=52.10% 00:26:02.793 cpu : usr=0.06%, sys=1.24%, ctx=719, majf=0, minf=32769 00:26:02.793 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=78.0% 00:26:02.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.793 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:26:02.793 issued rwts: total=286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.793 job3: (groupid=0, jobs=1): err= 0: pid=2369836: Sun Nov 3 15:44:38 2024 00:26:02.793 read: IOPS=2, BW=2776KiB/s (2843kB/s)(29.0MiB/10696msec) 00:26:02.793 slat (usec): min=931, max=2079.6k, avg=365667.92, stdev=778172.02 00:26:02.793 clat (msec): min=90, max=10692, avg=6765.61, stdev=3392.14 00:26:02.793 lat (msec): min=2154, max=10695, avg=7131.28, stdev=3213.80 00:26:02.793 clat percentiles (msec): 00:26:02.794 | 1.00th=[ 91], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2232], 00:26:02.794 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658], 00:26:02.794 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:02.794 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:02.794 | 99.99th=[10671] 00:26:02.794 lat (msec) : 100=3.45%, >=2000=96.55% 00:26:02.794 cpu : usr=0.00%, sys=0.28%, ctx=60, majf=0, minf=7425 00:26:02.794 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:26:02.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.794 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.794 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:26:02.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.794 job3: (groupid=0, jobs=1): err= 0: pid=2369837: Sun Nov 3 15:44:38 2024 00:26:02.794 read: IOPS=20, BW=20.5MiB/s (21.5MB/s)(223MiB/10877msec) 00:26:02.794 slat (usec): min=180, max=2105.8k, avg=48358.49, stdev=275216.82 00:26:02.794 clat (msec): min=90, max=10002, avg=5898.41, stdev=3779.39 00:26:02.794 lat (msec): min=1247, max=10012, avg=5946.77, stdev=3765.45 00:26:02.794 clat percentiles (msec): 00:26:02.794 | 1.00th=[ 1250], 5.00th=[ 1351], 10.00th=[ 1435], 20.00th=[ 1485], 00:26:02.794 | 30.00th=[ 1502], 40.00th=[ 3641], 50.00th=[ 8658], 60.00th=[ 8926], 00:26:02.794 | 70.00th=[ 9194], 80.00th=[ 9463], 90.00th=[ 9731], 95.00th=[ 9866], 00:26:02.794 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:26:02.794 | 99.99th=[10000] 00:26:02.794 bw ( KiB/s): min= 4096, max=83968, per=0.89%, avg=27793.57, stdev=30717.15, samples=7 00:26:02.794 iops : min= 4, max= 82, avg=27.00, stdev=30.11, samples=7 00:26:02.794 lat (msec) : 100=0.45%, 2000=37.67%, >=2000=61.88% 00:26:02.794 cpu : usr=0.03%, sys=1.44%, ctx=553, majf=0, minf=32769 00:26:02.794 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.3%, >=64=71.7% 00:26:02.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.794 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:26:02.794 issued rwts: total=223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.794 job3: (groupid=0, jobs=1): err= 0: pid=2369838: Sun Nov 3 15:44:38 2024 00:26:02.794 read: IOPS=8, BW=8651KiB/s (8859kB/s)(109MiB/12902msec) 00:26:02.794 slat (usec): min=720, max=2158.3k, avg=99001.55, stdev=429966.10 00:26:02.794 clat (msec): min=2110, max=12899, avg=10983.03, stdev=2976.48 00:26:02.794 lat (msec): min=4176, max=12901, avg=11082.03, stdev=2855.63 00:26:02.794 clat percentiles (msec): 00:26:02.794 | 1.00th=[ 4178], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 8423], 00:26:02.794 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818], 00:26:02.794 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12953], 00:26:02.794 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:26:02.794 | 99.99th=[12953] 00:26:02.794 lat (msec) : >=2000=100.00% 00:26:02.794 cpu : usr=0.00%, sys=0.97%, ctx=119, majf=0, minf=27905 00:26:02.794 IO depths : 1=0.9%, 2=1.8%, 4=3.7%, 8=7.3%, 16=14.7%, 32=29.4%, >=64=42.2% 00:26:02.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.794 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.794 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.794 job4: (groupid=0, jobs=1): err= 0: pid=2369839: Sun Nov 3 15:44:38 2024 00:26:02.794 read: IOPS=1, BW=1996KiB/s (2043kB/s)(21.0MiB/10776msec) 00:26:02.794 slat (usec): min=672, max=2091.4k, avg=507361.16, stdev=884036.81 00:26:02.794 clat (msec): min=121, max=10771, avg=6174.43, stdev=3302.89 00:26:02.794 lat (msec): min=2187, max=10775, avg=6681.79, stdev=3140.93 00:26:02.794 clat percentiles (msec): 00:26:02.794 | 1.00th=[ 122], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 2232], 00:26:02.794 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 6477], 60.00th=[ 6477], 00:26:02.794 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10671], 95.00th=[10671], 00:26:02.794 | 99.00th=[10805], 
99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.794 | 99.99th=[10805] 00:26:02.794 lat (msec) : 250=4.76%, >=2000=95.24% 00:26:02.794 cpu : usr=0.01%, sys=0.15%, ctx=75, majf=0, minf=5377 00:26:02.794 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:26:02.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.794 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.794 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.794 job4: (groupid=0, jobs=1): err= 0: pid=2369840: Sun Nov 3 15:44:38 2024 00:26:02.794 read: IOPS=5, BW=5295KiB/s (5423kB/s)(56.0MiB/10829msec) 00:26:02.794 slat (usec): min=756, max=2085.5k, avg=191194.09, stdev=581030.14 00:26:02.794 clat (msec): min=120, max=10825, avg=8719.64, stdev=3178.39 00:26:02.794 lat (msec): min=2168, max=10827, avg=8910.84, stdev=2966.72 00:26:02.794 clat percentiles (msec): 00:26:02.794 | 1.00th=[ 122], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 6477], 00:26:02.794 | 30.00th=[ 8658], 40.00th=[10537], 50.00th=[10671], 60.00th=[10671], 00:26:02.794 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:26:02.794 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.794 | 99.99th=[10805] 00:26:02.794 lat (msec) : 250=1.79%, >=2000=98.21% 00:26:02.794 cpu : usr=0.03%, sys=0.47%, ctx=98, majf=0, minf=14337 00:26:02.794 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:26:02.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.794 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.794 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.794 job4: (groupid=0, jobs=1): err= 0: pid=2369841: Sun Nov 3 15:44:38 2024 00:26:02.794 read: IOPS=185, BW=186MiB/s (195MB/s)(1864MiB/10022msec) 00:26:02.794 slat (usec): min=34, max=2080.1k, avg=5361.21, stdev=80037.62 00:26:02.794 clat (msec): min=20, max=6647, avg=512.31, stdev=882.25 00:26:02.794 lat (msec): min=23, max=6654, avg=517.67, stdev=893.38 00:26:02.794 clat percentiles (msec): 00:26:02.794 | 1.00th=[ 63], 5.00th=[ 218], 10.00th=[ 228], 20.00th=[ 230], 00:26:02.794 | 30.00th=[ 230], 40.00th=[ 230], 50.00th=[ 234], 60.00th=[ 239], 00:26:02.794 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 338], 95.00th=[ 2467], 00:26:02.794 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4597], 99.95th=[ 6678], 00:26:02.794 | 99.99th=[ 6678] 00:26:02.794 bw ( KiB/s): min=55296, max=562075, per=14.04%, avg=439574.14, stdev=180366.40, samples=7 00:26:02.794 iops : min= 54, max= 548, avg=429.14, stdev=176.04, samples=7 00:26:02.794 lat (msec) : 50=0.75%, 100=1.29%, 250=63.95%, 500=24.03%, >=2000=9.98% 00:26:02.794 cpu : usr=0.07%, sys=2.34%, ctx=1746, majf=0, minf=32769 00:26:02.794 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:02.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.794 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.794 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.794 job4: (groupid=0, jobs=1): err= 0: pid=2369842: Sun Nov 3 15:44:38 2024 00:26:02.794 read: IOPS=171, BW=172MiB/s (180MB/s)(1867MiB/10882msec) 
00:26:02.794 slat (usec): min=42, max=2071.4k, avg=5761.12, stdev=94991.20 00:26:02.794 clat (msec): min=121, max=8820, avg=724.81, stdev=2107.85 00:26:02.794 lat (msec): min=121, max=8821, avg=730.57, stdev=2115.84 00:26:02.794 clat percentiles (msec): 00:26:02.794 | 1.00th=[ 122], 5.00th=[ 123], 10.00th=[ 123], 20.00th=[ 124], 00:26:02.794 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 125], 60.00th=[ 126], 00:26:02.794 | 70.00th=[ 126], 80.00th=[ 127], 90.00th=[ 435], 95.00th=[ 8792], 00:26:02.794 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:26:02.794 | 99.99th=[ 8792] 00:26:02.794 bw ( KiB/s): min= 2048, max=1054720, per=12.64%, avg=395713.11, stdev=498807.51, samples=9 00:26:02.794 iops : min= 2, max= 1030, avg=386.22, stdev=487.30, samples=9 00:26:02.794 lat (msec) : 250=88.59%, 500=3.70%, >=2000=7.71% 00:26:02.794 cpu : usr=0.08%, sys=1.93%, ctx=1958, majf=0, minf=32331 00:26:02.794 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:02.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.794 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.794 issued rwts: total=1867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.794 job4: (groupid=0, jobs=1): err= 0: pid=2369843: Sun Nov 3 15:44:38 2024 00:26:02.794 read: IOPS=60, BW=60.2MiB/s (63.2MB/s)(653MiB/10839msec) 00:26:02.794 slat (usec): min=80, max=2086.4k, avg=16456.87, stdev=154559.08 00:26:02.794 clat (msec): min=87, max=6879, avg=2055.47, stdev=2238.96 00:26:02.794 lat (msec): min=450, max=6883, avg=2071.93, stdev=2243.60 00:26:02.794 clat percentiles (msec): 00:26:02.794 | 1.00th=[ 456], 5.00th=[ 502], 10.00th=[ 506], 20.00th=[ 518], 00:26:02.794 | 30.00th=[ 531], 40.00th=[ 542], 50.00th=[ 550], 60.00th=[ 2140], 00:26:02.794 | 70.00th=[ 2467], 80.00th=[ 2702], 90.00th=[ 6409], 95.00th=[ 6477], 00:26:02.794 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:26:02.794 | 99.99th=[ 6879] 00:26:02.794 bw ( KiB/s): min= 2048, max=251904, per=3.43%, avg=107507.30, stdev=112884.19, samples=10 00:26:02.794 iops : min= 2, max= 246, avg=104.90, stdev=110.28, samples=10 00:26:02.794 lat (msec) : 100=0.15%, 500=4.90%, 750=54.67%, >=2000=40.28% 00:26:02.795 cpu : usr=0.05%, sys=1.48%, ctx=1290, majf=0, minf=32769 00:26:02.795 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.4% 00:26:02.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.795 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:02.795 issued rwts: total=653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.795 job4: (groupid=0, jobs=1): err= 0: pid=2369844: Sun Nov 3 15:44:38 2024 00:26:02.795 read: IOPS=6, BW=6508KiB/s (6664kB/s)(69.0MiB/10857msec) 00:26:02.795 slat (usec): min=884, max=2085.9k, avg=155988.61, stdev=532092.57 00:26:02.795 clat (msec): min=92, max=10855, avg=9314.28, stdev=2753.22 00:26:02.795 lat (msec): min=2165, max=10856, avg=9470.27, stdev=2517.94 00:26:02.795 clat percentiles (msec): 00:26:02.795 | 1.00th=[ 93], 5.00th=[ 2232], 10.00th=[ 4396], 20.00th=[ 8658], 00:26:02.795 | 30.00th=[10671], 40.00th=[10671], 50.00th=[10671], 60.00th=[10805], 00:26:02.795 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:26:02.795 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 
00:26:02.795 | 99.99th=[10805] 00:26:02.795 lat (msec) : 100=1.45%, >=2000=98.55% 00:26:02.795 cpu : usr=0.00%, sys=0.77%, ctx=116, majf=0, minf=17665 00:26:02.795 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.6%, 16=23.2%, 32=46.4%, >=64=8.7% 00:26:02.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.795 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:02.795 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.795 job4: (groupid=0, jobs=1): err= 0: pid=2369845: Sun Nov 3 15:44:38 2024 00:26:02.795 read: IOPS=100, BW=100MiB/s (105MB/s)(1075MiB/10697msec) 00:26:02.795 slat (usec): min=42, max=2055.3k, avg=9826.83, stdev=121533.58 00:26:02.795 clat (msec): min=126, max=6557, avg=618.86, stdev=948.84 00:26:02.795 lat (msec): min=247, max=6565, avg=628.69, stdev=967.10 00:26:02.795 clat percentiles (msec): 00:26:02.795 | 1.00th=[ 247], 5.00th=[ 249], 10.00th=[ 249], 20.00th=[ 251], 00:26:02.795 | 30.00th=[ 253], 40.00th=[ 253], 50.00th=[ 253], 60.00th=[ 253], 00:26:02.795 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 2366], 95.00th=[ 2433], 00:26:02.795 | 99.00th=[ 4530], 99.50th=[ 6477], 99.90th=[ 6544], 99.95th=[ 6544], 00:26:02.795 | 99.99th=[ 6544] 00:26:02.795 bw ( KiB/s): min= 2043, max=515065, per=10.33%, avg=323262.67, stdev=250369.41, samples=6 00:26:02.795 iops : min= 1, max= 502, avg=315.33, stdev=244.59, samples=6 00:26:02.795 lat (msec) : 250=12.93%, 500=72.19%, >=2000=14.88% 00:26:02.795 cpu : usr=0.04%, sys=1.84%, ctx=983, majf=0, minf=32769 00:26:02.795 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:26:02.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.795 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.795 issued rwts: total=1075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.795 job4: (groupid=0, jobs=1): err= 0: pid=2369846: Sun Nov 3 15:44:38 2024 00:26:02.795 read: IOPS=2, BW=2192KiB/s (2245kB/s)(23.0MiB/10744msec) 00:26:02.795 slat (msec): min=2, max=2107, avg=463.03, stdev=857.99 00:26:02.795 clat (msec): min=93, max=10712, avg=6656.64, stdev=3312.92 00:26:02.795 lat (msec): min=2165, max=10743, avg=7119.67, stdev=3090.72 00:26:02.795 clat percentiles (msec): 00:26:02.795 | 1.00th=[ 94], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329], 00:26:02.795 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8658], 00:26:02.795 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:02.795 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:02.795 | 99.99th=[10671] 00:26:02.795 lat (msec) : 100=4.35%, >=2000=95.65% 00:26:02.795 cpu : usr=0.00%, sys=0.21%, ctx=75, majf=0, minf=5889 00:26:02.795 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:26:02.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.795 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.795 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.795 job4: (groupid=0, jobs=1): err= 0: pid=2369847: Sun Nov 3 15:44:38 2024 00:26:02.795 read: IOPS=51, BW=51.1MiB/s (53.6MB/s)(549MiB/10737msec) 00:26:02.795 slat (usec): min=508, max=2091.4k, avg=19392.04, stdev=172619.93 
00:26:02.795 clat (msec): min=86, max=6843, avg=1099.09, stdev=1156.07 00:26:02.795 lat (msec): min=454, max=6854, avg=1118.48, stdev=1181.73 00:26:02.795 clat percentiles (msec): 00:26:02.795 | 1.00th=[ 451], 5.00th=[ 456], 10.00th=[ 464], 20.00th=[ 485], 00:26:02.795 | 30.00th=[ 489], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 514], 00:26:02.795 | 70.00th=[ 523], 80.00th=[ 2366], 90.00th=[ 2601], 95.00th=[ 2668], 00:26:02.795 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:26:02.795 | 99.99th=[ 6812] 00:26:02.795 bw ( KiB/s): min= 1587, max=262144, per=4.59%, avg=143796.67, stdev=126993.72, samples=6 00:26:02.795 iops : min= 1, max= 256, avg=140.17, stdev=123.97, samples=6 00:26:02.795 lat (msec) : 100=0.18%, 500=45.17%, 750=28.42%, >=2000=26.23% 00:26:02.795 cpu : usr=0.00%, sys=1.24%, ctx=1036, majf=0, minf=32769 00:26:02.795 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.5% 00:26:02.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.795 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:02.795 issued rwts: total=549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.795 job4: (groupid=0, jobs=1): err= 0: pid=2369848: Sun Nov 3 15:44:38 2024 00:26:02.795 read: IOPS=3, BW=3322KiB/s (3402kB/s)(35.0MiB/10789msec) 00:26:02.795 slat (msec): min=5, max=2091, avg=305.57, stdev=714.67 00:26:02.795 clat (msec): min=93, max=10777, avg=7840.96, stdev=3351.03 00:26:02.795 lat (msec): min=2159, max=10788, avg=8146.53, stdev=3102.20 00:26:02.795 clat percentiles (msec): 00:26:02.795 | 1.00th=[ 94], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329], 00:26:02.795 | 30.00th=[ 6477], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10537], 00:26:02.795 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:26:02.795 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.795 | 99.99th=[10805] 00:26:02.795 lat (msec) : 100=2.86%, >=2000=97.14% 00:26:02.795 cpu : usr=0.00%, sys=0.26%, ctx=136, majf=0, minf=8961 00:26:02.795 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:26:02.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.795 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:02.795 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.795 job4: (groupid=0, jobs=1): err= 0: pid=2369849: Sun Nov 3 15:44:38 2024 00:26:02.795 read: IOPS=2, BW=2465KiB/s (2524kB/s)(26.0MiB/10801msec) 00:26:02.795 slat (usec): min=1628, max=2090.9k, avg=411709.56, stdev=814551.48 00:26:02.795 clat (msec): min=95, max=10776, avg=7608.67, stdev=3295.62 00:26:02.795 lat (msec): min=2164, max=10800, avg=8020.37, stdev=2972.31 00:26:02.795 clat percentiles (msec): 00:26:02.795 | 1.00th=[ 96], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:26:02.795 | 30.00th=[ 6477], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[ 8658], 00:26:02.795 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10805], 00:26:02.795 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:02.795 | 99.99th=[10805] 00:26:02.795 lat (msec) : 100=3.85%, >=2000=96.15% 00:26:02.795 cpu : usr=0.00%, sys=0.22%, ctx=76, majf=0, minf=6657 00:26:02.795 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:26:02.795 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.795 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:02.795 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.796 job4: (groupid=0, jobs=1): err= 0: pid=2369850: Sun Nov 3 15:44:38 2024 00:26:02.796 read: IOPS=103, BW=103MiB/s (108MB/s)(1108MiB/10717msec) 00:26:02.796 slat (usec): min=41, max=2059.5k, avg=9561.73, stdev=120358.75 00:26:02.796 clat (msec): min=118, max=6576, avg=606.40, stdev=966.35 00:26:02.796 lat (msec): min=237, max=6584, avg=615.96, stdev=983.90 00:26:02.796 clat percentiles (msec): 00:26:02.796 | 1.00th=[ 239], 5.00th=[ 241], 10.00th=[ 241], 20.00th=[ 243], 00:26:02.796 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 249], 00:26:02.796 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 2333], 95.00th=[ 2433], 00:26:02.796 | 99.00th=[ 4530], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:26:02.796 | 99.99th=[ 6544] 00:26:02.796 bw ( KiB/s): min= 2048, max=532480, per=10.68%, avg=334511.83, stdev=257920.73, samples=6 00:26:02.796 iops : min= 2, max= 520, avg=326.33, stdev=251.98, samples=6 00:26:02.796 lat (msec) : 250=68.77%, 500=16.79%, >=2000=14.44% 00:26:02.796 cpu : usr=0.01%, sys=1.33%, ctx=1056, majf=0, minf=32769 00:26:02.796 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:26:02.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.796 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.796 issued rwts: total=1108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.796 job4: (groupid=0, jobs=1): err= 0: pid=2369851: Sun Nov 3 15:44:38 2024 00:26:02.796 read: IOPS=265, BW=265MiB/s (278MB/s)(2886MiB/10875msec) 00:26:02.796 slat (usec): min=39, max=2022.1k, avg=3722.24, stdev=37828.05 00:26:02.796 clat (msec): min=117, max=2507, avg=463.99, stdev=421.01 00:26:02.796 lat (msec): min=254, max=2508, avg=467.72, stdev=422.24 00:26:02.796 clat percentiles (msec): 00:26:02.796 | 1.00th=[ 257], 5.00th=[ 259], 10.00th=[ 262], 20.00th=[ 266], 00:26:02.796 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 380], 60.00th=[ 397], 00:26:02.796 | 70.00th=[ 405], 80.00th=[ 514], 90.00th=[ 535], 95.00th=[ 634], 00:26:02.796 | 99.00th=[ 2433], 99.50th=[ 2467], 99.90th=[ 2500], 99.95th=[ 2500], 00:26:02.796 | 99.99th=[ 2500] 00:26:02.796 bw ( KiB/s): min=118546, max=501760, per=10.61%, avg=332151.94, stdev=105835.36, samples=17 00:26:02.796 iops : min= 115, max= 490, avg=324.29, stdev=103.46, samples=17 00:26:02.796 lat (msec) : 250=0.03%, 500=79.38%, 750=16.18%, >=2000=4.40% 00:26:02.796 cpu : usr=0.17%, sys=3.21%, ctx=2669, majf=0, minf=32769 00:26:02.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:02.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.796 issued rwts: total=2886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.796 job5: (groupid=0, jobs=1): err= 0: pid=2369852: Sun Nov 3 15:44:38 2024 00:26:02.796 read: IOPS=123, BW=123MiB/s (129MB/s)(1331MiB/10790msec) 00:26:02.796 slat (usec): min=48, max=2138.7k, avg=8032.15, stdev=81210.00 00:26:02.796 clat (msec): min=93, max=3278, avg=965.56, stdev=888.88 
00:26:02.796 lat (msec): min=270, max=3291, avg=973.59, stdev=891.55 00:26:02.796 clat percentiles (msec): 00:26:02.796 | 1.00th=[ 271], 5.00th=[ 279], 10.00th=[ 292], 20.00th=[ 305], 00:26:02.796 | 30.00th=[ 321], 40.00th=[ 334], 50.00th=[ 342], 60.00th=[ 1003], 00:26:02.796 | 70.00th=[ 1062], 80.00th=[ 1284], 90.00th=[ 2534], 95.00th=[ 2903], 00:26:02.796 | 99.00th=[ 3205], 99.50th=[ 3239], 99.90th=[ 3239], 99.95th=[ 3272], 00:26:02.796 | 99.99th=[ 3272] 00:26:02.796 bw ( KiB/s): min= 1452, max=452608, per=5.62%, avg=176056.43, stdev=158283.80, samples=14 00:26:02.796 iops : min= 1, max= 442, avg=171.79, stdev=154.67, samples=14 00:26:02.796 lat (msec) : 100=0.08%, 500=52.89%, 1000=6.61%, 2000=21.34%, >=2000=19.08% 00:26:02.796 cpu : usr=0.04%, sys=1.71%, ctx=1967, majf=0, minf=32769 00:26:02.796 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:26:02.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.796 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.796 issued rwts: total=1331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.796 job5: (groupid=0, jobs=1): err= 0: pid=2369853: Sun Nov 3 15:44:38 2024 00:26:02.796 read: IOPS=164, BW=164MiB/s (172MB/s)(1646MiB/10014msec) 00:26:02.796 slat (usec): min=102, max=2036.6k, avg=6072.80, stdev=80475.76 00:26:02.796 clat (msec): min=13, max=6573, avg=606.21, stdev=786.30 00:26:02.796 lat (msec): min=14, max=6587, avg=612.28, stdev=796.45 00:26:02.796 clat percentiles (msec): 00:26:02.796 | 1.00th=[ 29], 5.00th=[ 124], 10.00th=[ 209], 20.00th=[ 224], 00:26:02.796 | 30.00th=[ 230], 40.00th=[ 247], 50.00th=[ 313], 60.00th=[ 330], 00:26:02.796 | 70.00th=[ 355], 80.00th=[ 393], 90.00th=[ 2089], 95.00th=[ 2567], 00:26:02.796 | 99.00th=[ 2601], 99.50th=[ 2601], 99.90th=[ 4665], 99.95th=[ 6544], 00:26:02.796 | 99.99th=[ 6544] 00:26:02.796 bw ( KiB/s): min=45056, max=595968, per=11.03%, avg=345410.56, stdev=181535.88, samples=9 00:26:02.796 iops : min= 44, max= 582, avg=337.22, stdev=177.27, samples=9 00:26:02.796 lat (msec) : 20=0.43%, 50=1.88%, 100=1.94%, 250=36.57%, 500=43.20% 00:26:02.796 lat (msec) : 2000=0.30%, >=2000=15.67% 00:26:02.796 cpu : usr=0.07%, sys=1.62%, ctx=2707, majf=0, minf=32769 00:26:02.796 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:02.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.796 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.796 issued rwts: total=1646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.796 job5: (groupid=0, jobs=1): err= 0: pid=2369854: Sun Nov 3 15:44:38 2024 00:26:02.796 read: IOPS=82, BW=82.4MiB/s (86.4MB/s)(825MiB/10017msec) 00:26:02.796 slat (usec): min=106, max=2131.9k, avg=12121.51, stdev=125047.10 00:26:02.796 clat (msec): min=13, max=5169, avg=965.02, stdev=1203.24 00:26:02.796 lat (msec): min=17, max=5177, avg=977.14, stdev=1215.33 00:26:02.796 clat percentiles (msec): 00:26:02.796 | 1.00th=[ 35], 5.00th=[ 123], 10.00th=[ 209], 20.00th=[ 251], 00:26:02.796 | 30.00th=[ 309], 40.00th=[ 338], 50.00th=[ 342], 60.00th=[ 347], 00:26:02.796 | 70.00th=[ 1062], 80.00th=[ 1318], 90.00th=[ 3004], 95.00th=[ 3373], 00:26:02.796 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201], 00:26:02.796 | 99.99th=[ 5201] 00:26:02.796 bw ( KiB/s): min=79872, max=392431, 
per=7.60%, avg=237992.33, stdev=158639.32, samples=6 00:26:02.796 iops : min= 78, max= 383, avg=232.33, stdev=154.83, samples=6 00:26:02.796 lat (msec) : 20=0.36%, 50=1.33%, 100=2.42%, 250=15.76%, 500=48.12% 00:26:02.796 lat (msec) : 1000=0.61%, 2000=12.61%, >=2000=18.79% 00:26:02.796 cpu : usr=0.01%, sys=1.48%, ctx=2230, majf=0, minf=32769 00:26:02.796 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:26:02.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.796 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.796 issued rwts: total=825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.796 job5: (groupid=0, jobs=1): err= 0: pid=2369855: Sun Nov 3 15:44:38 2024 00:26:02.796 read: IOPS=22, BW=22.0MiB/s (23.1MB/s)(238MiB/10794msec) 00:26:02.796 slat (usec): min=886, max=2090.6k, avg=44961.93, stdev=260942.84 00:26:02.796 clat (msec): min=91, max=7239, avg=3046.67, stdev=1702.99 00:26:02.796 lat (msec): min=1109, max=7247, avg=3091.63, stdev=1709.56 00:26:02.796 clat percentiles (msec): 00:26:02.796 | 1.00th=[ 1116], 5.00th=[ 1150], 10.00th=[ 1351], 20.00th=[ 1485], 00:26:02.796 | 30.00th=[ 2165], 40.00th=[ 2467], 50.00th=[ 2769], 60.00th=[ 3205], 00:26:02.796 | 70.00th=[ 3440], 80.00th=[ 3641], 90.00th=[ 5269], 95.00th=[ 7215], 00:26:02.796 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:26:02.796 | 99.99th=[ 7215] 00:26:02.796 bw ( KiB/s): min= 1438, max=90112, per=1.45%, avg=45339.40, stdev=38257.76, samples=5 00:26:02.796 iops : min= 1, max= 88, avg=44.00, stdev=37.70, samples=5 00:26:02.796 lat (msec) : 100=0.42%, 2000=29.41%, >=2000=70.17% 00:26:02.796 cpu : usr=0.02%, sys=1.13%, ctx=1103, majf=0, minf=32769 00:26:02.796 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.7%, 32=13.4%, >=64=73.5% 00:26:02.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.796 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:26:02.796 issued rwts: total=238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.796 job5: (groupid=0, jobs=1): err= 0: pid=2369856: Sun Nov 3 15:44:38 2024 00:26:02.796 read: IOPS=76, BW=76.6MiB/s (80.3MB/s)(767MiB/10012msec) 00:26:02.796 slat (usec): min=417, max=2229.4k, avg=13035.43, stdev=131896.08 00:26:02.796 clat (msec): min=10, max=8921, avg=1244.47, stdev=1755.54 00:26:02.796 lat (msec): min=11, max=8956, avg=1257.50, stdev=1771.62 00:26:02.796 clat percentiles (msec): 00:26:02.796 | 1.00th=[ 16], 5.00th=[ 41], 10.00th=[ 121], 20.00th=[ 317], 00:26:02.796 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 363], 60.00th=[ 443], 00:26:02.796 | 70.00th=[ 944], 80.00th=[ 1011], 90.00th=[ 5000], 95.00th=[ 5201], 00:26:02.796 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 8926], 99.95th=[ 8926], 00:26:02.796 | 99.99th=[ 8926] 00:26:02.796 bw ( KiB/s): min= 8192, max=364544, per=4.73%, avg=148025.33, stdev=126007.99, samples=6 00:26:02.796 iops : min= 8, max= 356, avg=144.33, stdev=123.02, samples=6 00:26:02.796 lat (msec) : 20=1.69%, 50=4.04%, 100=3.26%, 250=6.65%, 500=48.24% 00:26:02.796 lat (msec) : 1000=14.47%, 2000=3.65%, >=2000=17.99% 00:26:02.796 cpu : usr=0.01%, sys=1.27%, ctx=2097, majf=0, minf=32769 00:26:02.796 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:26:02.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:02.797 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:02.797 issued rwts: total=767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.797 job5: (groupid=0, jobs=1): err= 0: pid=2369857: Sun Nov 3 15:44:38 2024 00:26:02.797 read: IOPS=100, BW=100MiB/s (105MB/s)(1075MiB/10718msec) 00:26:02.797 slat (usec): min=42, max=2089.2k, avg=9859.70, stdev=106865.62 00:26:02.797 clat (msec): min=113, max=4873, avg=793.16, stdev=783.98 00:26:02.797 lat (msec): min=362, max=4881, avg=803.02, stdev=794.72 00:26:02.797 clat percentiles (msec): 00:26:02.797 | 1.00th=[ 363], 5.00th=[ 368], 10.00th=[ 368], 20.00th=[ 376], 00:26:02.797 | 30.00th=[ 384], 40.00th=[ 393], 50.00th=[ 401], 60.00th=[ 684], 00:26:02.797 | 70.00th=[ 709], 80.00th=[ 726], 90.00th=[ 2333], 95.00th=[ 2500], 00:26:02.797 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:26:02.797 | 99.99th=[ 4866] 00:26:02.797 bw ( KiB/s): min= 2043, max=351552, per=6.19%, avg=193939.90, stdev=141237.59, samples=10 00:26:02.797 iops : min= 1, max= 343, avg=189.10, stdev=138.01, samples=10 00:26:02.797 lat (msec) : 250=0.09%, 500=52.28%, 750=31.35%, 1000=2.70%, >=2000=13.58% 00:26:02.797 cpu : usr=0.02%, sys=1.72%, ctx=1689, majf=0, minf=32769 00:26:02.797 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:26:02.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.797 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.797 issued rwts: total=1075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.797 job5: (groupid=0, jobs=1): err= 0: pid=2369858: Sun Nov 3 15:44:38 2024 00:26:02.797 read: IOPS=100, BW=101MiB/s (105MB/s)(1007MiB/10016msec) 00:26:02.797 slat (usec): min=43, max=2147.3k, avg=9927.44, stdev=92233.04 00:26:02.797 clat (msec): min=13, max=3685, avg=1166.81, stdev=1135.01 00:26:02.797 lat (msec): min=16, max=3693, avg=1176.74, stdev=1139.48 00:26:02.797 clat percentiles (msec): 00:26:02.797 | 1.00th=[ 29], 5.00th=[ 128], 10.00th=[ 268], 20.00th=[ 355], 00:26:02.797 | 30.00th=[ 359], 40.00th=[ 368], 50.00th=[ 393], 60.00th=[ 1028], 00:26:02.797 | 70.00th=[ 1351], 80.00th=[ 2735], 90.00th=[ 3071], 95.00th=[ 3272], 00:26:02.797 | 99.00th=[ 3574], 99.50th=[ 3608], 99.90th=[ 3675], 99.95th=[ 3675], 00:26:02.797 | 99.99th=[ 3675] 00:26:02.797 bw ( KiB/s): min=18432, max=374784, per=5.23%, avg=163750.27, stdev=127228.00, samples=11 00:26:02.797 iops : min= 18, max= 366, avg=159.64, stdev=124.39, samples=11 00:26:02.797 lat (msec) : 20=0.50%, 50=1.69%, 100=1.89%, 250=5.36%, 500=45.28% 00:26:02.797 lat (msec) : 750=2.18%, 1000=2.88%, 2000=15.00%, >=2000=25.22% 00:26:02.797 cpu : usr=0.04%, sys=1.60%, ctx=2695, majf=0, minf=32769 00:26:02.797 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:26:02.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.797 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.797 issued rwts: total=1007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.797 job5: (groupid=0, jobs=1): err= 0: pid=2369859: Sun Nov 3 15:44:38 2024 00:26:02.797 read: IOPS=234, BW=235MiB/s (246MB/s)(2348MiB/10009msec) 00:26:02.797 slat (usec): min=38, max=2081.6k, avg=4255.86, stdev=71521.10 00:26:02.797 clat (msec): 
min=8, max=4611, avg=429.55, stdev=973.65 00:26:02.797 lat (msec): min=9, max=4613, avg=433.81, stdev=978.33 00:26:02.797 clat percentiles (msec): 00:26:02.797 | 1.00th=[ 22], 5.00th=[ 104], 10.00th=[ 125], 20.00th=[ 126], 00:26:02.797 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 241], 60.00th=[ 245], 00:26:02.797 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 259], 95.00th=[ 2467], 00:26:02.797 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:26:02.797 | 99.99th=[ 4597] 00:26:02.797 bw ( KiB/s): min= 8175, max=1038336, per=12.38%, avg=387529.22, stdev=326006.09, samples=9 00:26:02.797 iops : min= 7, max= 1014, avg=378.22, stdev=318.45, samples=9 00:26:02.797 lat (msec) : 10=0.13%, 20=0.77%, 50=2.17%, 100=1.79%, 250=73.68% 00:26:02.797 lat (msec) : 500=15.03%, >=2000=6.43% 00:26:02.797 cpu : usr=0.08%, sys=2.09%, ctx=2513, majf=0, minf=32769 00:26:02.797 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:02.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.797 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.797 job5: (groupid=0, jobs=1): err= 0: pid=2369860: Sun Nov 3 15:44:38 2024 00:26:02.797 read: IOPS=63, BW=63.8MiB/s (66.9MB/s)(682MiB/10694msec) 00:26:02.797 slat (usec): min=74, max=2072.4k, avg=15496.19, stdev=132774.99 00:26:02.797 clat (msec): min=119, max=4870, avg=1205.59, stdev=829.36 00:26:02.797 lat (msec): min=709, max=4884, avg=1221.08, stdev=841.05 00:26:02.797 clat percentiles (msec): 00:26:02.797 | 1.00th=[ 709], 5.00th=[ 709], 10.00th=[ 718], 20.00th=[ 726], 00:26:02.797 | 30.00th=[ 743], 40.00th=[ 835], 50.00th=[ 852], 60.00th=[ 860], 00:26:02.797 | 70.00th=[ 869], 80.00th=[ 2265], 90.00th=[ 2735], 95.00th=[ 2903], 00:26:02.797 | 99.00th=[ 3104], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:26:02.797 | 99.99th=[ 4866] 00:26:02.797 bw ( KiB/s): min= 2043, max=182272, per=3.63%, avg=113611.40, stdev=70418.60, samples=10 00:26:02.797 iops : min= 1, max= 178, avg=110.70, stdev=68.91, samples=10 00:26:02.797 lat (msec) : 250=0.15%, 750=32.70%, 1000=46.63%, >=2000=20.53% 00:26:02.797 cpu : usr=0.07%, sys=1.49%, ctx=1678, majf=0, minf=32769 00:26:02.797 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:26:02.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.797 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:02.797 issued rwts: total=682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.797 job5: (groupid=0, jobs=1): err= 0: pid=2369861: Sun Nov 3 15:44:38 2024 00:26:02.797 read: IOPS=19, BW=19.4MiB/s (20.4MB/s)(208MiB/10717msec) 00:26:02.797 slat (usec): min=648, max=2131.9k, avg=50907.57, stdev=282815.89 00:26:02.797 clat (msec): min=127, max=5825, avg=4037.32, stdev=1664.11 00:26:02.797 lat (msec): min=1450, max=5829, avg=4088.23, stdev=1635.12 00:26:02.797 clat percentiles (msec): 00:26:02.797 | 1.00th=[ 1452], 5.00th=[ 1452], 10.00th=[ 1469], 20.00th=[ 1519], 00:26:02.797 | 30.00th=[ 3540], 40.00th=[ 4530], 50.00th=[ 4732], 60.00th=[ 4933], 00:26:02.797 | 70.00th=[ 5336], 80.00th=[ 5470], 90.00th=[ 5671], 95.00th=[ 5738], 00:26:02.797 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:26:02.797 | 99.99th=[ 5805] 00:26:02.797 
bw ( KiB/s): min= 2043, max=79872, per=0.88%, avg=27664.00, stdev=33090.40, samples=6 00:26:02.797 iops : min= 1, max= 78, avg=26.67, stdev=32.58, samples=6 00:26:02.797 lat (msec) : 250=0.48%, 2000=25.48%, >=2000=74.04% 00:26:02.797 cpu : usr=0.02%, sys=0.87%, ctx=742, majf=0, minf=32769 00:26:02.797 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.7%, 32=15.4%, >=64=69.7% 00:26:02.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.797 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:26:02.797 issued rwts: total=208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.797 job5: (groupid=0, jobs=1): err= 0: pid=2369862: Sun Nov 3 15:44:38 2024 00:26:02.797 read: IOPS=79, BW=79.1MiB/s (82.9MB/s)(792MiB/10017msec) 00:26:02.797 slat (usec): min=468, max=2145.8k, avg=12624.58, stdev=125349.14 00:26:02.797 clat (msec): min=15, max=5363, avg=1112.35, stdev=1499.57 00:26:02.797 lat (msec): min=17, max=5371, avg=1124.97, stdev=1510.55 00:26:02.797 clat percentiles (msec): 00:26:02.797 | 1.00th=[ 37], 5.00th=[ 110], 10.00th=[ 194], 20.00th=[ 262], 00:26:02.797 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 288], 60.00th=[ 292], 00:26:02.797 | 70.00th=[ 401], 80.00th=[ 2903], 90.00th=[ 3540], 95.00th=[ 5067], 00:26:02.797 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:26:02.797 | 99.99th=[ 5336] 00:26:02.797 bw ( KiB/s): min=38912, max=468992, per=7.25%, avg=226911.67, stdev=193115.08, samples=6 00:26:02.797 iops : min= 38, max= 458, avg=221.50, stdev=188.59, samples=6 00:26:02.797 lat (msec) : 20=0.25%, 50=1.26%, 100=3.03%, 250=8.33%, 500=58.08% 00:26:02.797 lat (msec) : 2000=7.20%, >=2000=21.84% 00:26:02.797 cpu : usr=0.00%, sys=1.62%, ctx=2203, majf=0, minf=32769 00:26:02.797 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.0% 00:26:02.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.797 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:02.797 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.797 job5: (groupid=0, jobs=1): err= 0: pid=2369863: Sun Nov 3 15:44:38 2024 00:26:02.797 read: IOPS=170, BW=171MiB/s (179MB/s)(2195MiB/12855msec) 00:26:02.797 slat (usec): min=37, max=2115.1k, avg=4870.98, stdev=74674.31 00:26:02.798 clat (msec): min=218, max=4545, avg=578.07, stdev=1108.99 00:26:02.798 lat (msec): min=219, max=4547, avg=582.94, stdev=1114.53 00:26:02.798 clat percentiles (msec): 00:26:02.798 | 1.00th=[ 220], 5.00th=[ 222], 10.00th=[ 222], 20.00th=[ 226], 00:26:02.798 | 30.00th=[ 230], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:26:02.798 | 70.00th=[ 251], 80.00th=[ 292], 90.00th=[ 342], 95.00th=[ 4396], 00:26:02.798 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:26:02.798 | 99.99th=[ 4530] 00:26:02.798 bw ( KiB/s): min= 1432, max=585728, per=13.52%, avg=423347.30, stdev=209742.07, samples=10 00:26:02.798 iops : min= 1, max= 572, avg=413.20, stdev=205.03, samples=10 00:26:02.798 lat (msec) : 250=68.56%, 500=23.19%, >=2000=8.25% 00:26:02.798 cpu : usr=0.10%, sys=2.25%, ctx=2004, majf=0, minf=32769 00:26:02.798 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:02.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:26:02.798 issued rwts: total=2195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.798 job5: (groupid=0, jobs=1): err= 0: pid=2369864: Sun Nov 3 15:44:38 2024 00:26:02.798 read: IOPS=117, BW=117MiB/s (123MB/s)(1276MiB/10881msec) 00:26:02.798 slat (usec): min=41, max=2105.7k, avg=8421.22, stdev=98002.51 00:26:02.798 clat (msec): min=127, max=6771, avg=940.81, stdev=1052.64 00:26:02.798 lat (msec): min=246, max=6781, avg=949.24, stdev=1062.31 00:26:02.798 clat percentiles (msec): 00:26:02.798 | 1.00th=[ 249], 5.00th=[ 251], 10.00th=[ 251], 20.00th=[ 253], 00:26:02.798 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 255], 60.00th=[ 506], 00:26:02.798 | 70.00th=[ 860], 80.00th=[ 2366], 90.00th=[ 2735], 95.00th=[ 3104], 00:26:02.798 | 99.00th=[ 3171], 99.50th=[ 3205], 99.90th=[ 4866], 99.95th=[ 6745], 00:26:02.798 | 99.99th=[ 6745] 00:26:02.798 bw ( KiB/s): min= 2048, max=516096, per=7.51%, avg=234961.80, stdev=198387.06, samples=10 00:26:02.798 iops : min= 2, max= 504, avg=229.20, stdev=193.69, samples=10 00:26:02.798 lat (msec) : 250=2.98%, 500=56.97%, 750=7.45%, 1000=8.46%, 2000=0.08% 00:26:02.798 lat (msec) : >=2000=24.06% 00:26:02.798 cpu : usr=0.06%, sys=2.01%, ctx=2066, majf=0, minf=32769 00:26:02.798 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1% 00:26:02.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.798 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.798 issued rwts: total=1276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.798 00:26:02.798 Run status group 0 (all jobs): 00:26:02.798 READ: bw=3057MiB/s (3206MB/s), 1205KiB/s-274MiB/s (1234kB/s-288MB/s), io=38.6GiB (41.4GB), run=10009-12912msec 00:26:02.798 00:26:02.798 Disk stats (read/write): 00:26:02.798 nvme0n1: ios=20411/0, merge=0/0, ticks=9428440/0, in_queue=9428440, util=98.34% 00:26:02.798 nvme1n1: ios=31983/0, merge=0/0, ticks=8839971/0, in_queue=8839971, util=98.39% 00:26:02.798 nvme2n1: ios=30749/0, merge=0/0, ticks=9115374/0, in_queue=9115374, util=98.76% 00:26:02.798 nvme3n1: ios=34923/0, merge=0/0, ticks=7981971/0, in_queue=7981971, util=98.94% 00:26:02.798 nvme4n1: ios=81187/0, merge=0/0, ticks=7715824/0, in_queue=7715824, util=99.02% 00:26:02.798 nvme5n1: ios=115114/0, merge=0/0, ticks=8173224/0, in_queue=8173224, util=99.30% 00:26:02.798 15:44:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:26:02.798 15:44:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:26:02.798 15:44:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:02.798 15:44:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:26:02.798 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # 
grep -q -w SPDK00000000000000 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000000 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:02.798 15:44:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:03.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000001 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000001 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:03.366 15:44:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:04.303 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:04.303 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:26:04.303 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:26:04.303 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:04.303 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000002 00:26:04.303 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:04.303 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000002 00:26:04.304 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:26:04.304 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:04.304 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.304 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:04.304 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.304 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:04.304 15:44:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:05.241 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000003 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000003 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:05.500 15:44:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:06.437 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:06.437 15:44:44 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000004 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000004 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:06.437 15:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:07.374 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:07.374 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:26:07.374 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:26:07.374 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000005 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000005 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:07.375 15:44:45 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:07.375 rmmod nvme_rdma 00:26:07.375 rmmod nvme_fabrics 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 2368451 ']' 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 2368451 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # '[' -z 2368451 ']' 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # kill -0 2368451 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@957 -- # uname 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:07.375 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2368451 00:26:07.634 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:07.634 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:07.634 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2368451' 00:26:07.634 killing process with pid 2368451 00:26:07.634 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@971 -- # kill 2368451 00:26:07.634 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@976 -- # wait 2368451 00:26:07.894 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:07.894 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:07.894 00:26:07.894 real 0m33.994s 00:26:07.894 user 1m59.051s 00:26:07.894 sys 0m15.938s 00:26:07.894 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:07.894 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:07.894 ************************************ 00:26:07.894 END TEST nvmf_srq_overwhelm 00:26:07.894 ************************************ 00:26:07.894 15:44:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:26:07.894 15:44:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:07.894 15:44:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:07.894 15:44:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:07.894 ************************************ 00:26:07.894 START TEST 
nvmf_shutdown 00:26:07.894 ************************************ 00:26:07.894 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:26:07.894 * Looking for test storage... 00:26:08.224 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:08.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.224 --rc genhtml_branch_coverage=1 00:26:08.224 --rc genhtml_function_coverage=1 00:26:08.224 --rc genhtml_legend=1 00:26:08.224 --rc geninfo_all_blocks=1 00:26:08.224 --rc geninfo_unexecuted_blocks=1 00:26:08.224 00:26:08.224 ' 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:08.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.224 --rc genhtml_branch_coverage=1 00:26:08.224 --rc genhtml_function_coverage=1 00:26:08.224 --rc genhtml_legend=1 00:26:08.224 --rc geninfo_all_blocks=1 00:26:08.224 --rc geninfo_unexecuted_blocks=1 00:26:08.224 00:26:08.224 ' 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:08.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.224 --rc genhtml_branch_coverage=1 00:26:08.224 --rc genhtml_function_coverage=1 00:26:08.224 --rc genhtml_legend=1 00:26:08.224 --rc geninfo_all_blocks=1 00:26:08.224 --rc geninfo_unexecuted_blocks=1 00:26:08.224 00:26:08.224 ' 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:08.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.224 --rc genhtml_branch_coverage=1 00:26:08.224 --rc genhtml_function_coverage=1 00:26:08.224 --rc genhtml_legend=1 00:26:08.224 --rc geninfo_all_blocks=1 00:26:08.224 --rc geninfo_unexecuted_blocks=1 00:26:08.224 00:26:08.224 ' 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:08.224 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:08.225 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:08.225 15:44:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:08.225 ************************************ 00:26:08.225 START TEST nvmf_shutdown_tc1 00:26:08.225 ************************************ 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:08.225 15:44:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:14.801 15:44:52 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:14.801 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
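
gather_supported_nvmf_pci_devs, traced above, collects PCI functions into per-family arrays (e810, x722, mlx) by looking up vendor:device IDs, then merges them into pci_devs. A condensed sketch of the pattern, assuming pci_bus_cache is an associative array mapping 'vendor:device' to a space-separated list of PCI addresses (the real cache is populated earlier in common.sh):

# Hypothetical cache entry matching the two ConnectX-4 Lx ports found in this run
declare -A pci_bus_cache=(
    ["0x15b3:0x1015"]="0000:d9:00.0 0000:d9:00.1"
)
intel=0x8086 mellanox=0x15b3    # vendor IDs used by the full lookup table
mlx=() pci_devs=()
mlx+=(${pci_bus_cache["$mellanox:0x1015"]})    # unquoted on purpose: split the BDF list
pci_devs+=("${mlx[@]}")
printf 'candidate NVMe-oF NIC: %s\n' "${pci_devs[@]}"
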
pci_devs=("${mlx[@]}") 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:14.802 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:14.802 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.0: mlx_0_0' 00:26:14.802 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:14.802 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
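
Once the Linux check passes, rdma_device_init reduces to a fixed modprobe sequence. The same module set, in the order the trace shows (run as root):

# InfiniBand/RDMA core stack first, then the connection managers
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done
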
00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:14.802 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:14.803 15:44:52 
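
get_ip_address, traced at nvmf/common.sh@116-117, is a three-stage pipeline over the one-line output of 'ip -o -4 addr show'. A standalone equivalent:

# First IPv4 address of an interface, with the /prefix stripped
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0    # -> 192.168.100.8 on this host
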
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:14.803 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:14.803 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:14.803 altname enp217s0f0np0 00:26:14.803 altname ens818f0np0 00:26:14.803 inet 192.168.100.8/24 scope global mlx_0_0 00:26:14.803 valid_lft forever preferred_lft forever 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:14.803 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:14.803 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:14.803 altname enp217s0f1np1 00:26:14.803 altname ens818f1np1 00:26:14.803 inet 192.168.100.9/24 scope global mlx_0_1 00:26:14.803 valid_lft forever preferred_lft forever 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:14.803 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:15.063 
15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:15.063 192.168.100.9' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:15.063 192.168.100.9' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:26:15.063 15:44:52 
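
The two interface addresses end up in RDMA_IP_LIST as a newline-separated string, which the harness splits with head and tail exactly as traced:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
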
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:15.063 192.168.100.9' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2376439 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2376439 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2376439 ']' 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:15.063 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:15.063 [2024-11-03 15:44:52.715147] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:26:15.063 [2024-11-03 15:44:52.715200] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.063 [2024-11-03 15:44:52.792637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:15.063 [2024-11-03 15:44:52.815031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.063 [2024-11-03 15:44:52.815072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.063 [2024-11-03 15:44:52.815081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.063 [2024-11-03 15:44:52.815090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.063 [2024-11-03 15:44:52.815097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.063 [2024-11-03 15:44:52.816905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.063 [2024-11-03 15:44:52.816991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.063 [2024-11-03 15:44:52.817099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.063 [2024-11-03 15:44:52.817100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:15.323 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:15.323 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:26:15.323 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:15.323 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:15.323 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:15.323 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.323 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:15.323 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.323 15:44:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:15.323 [2024-11-03 15:44:52.980556] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x166af50/0x166f400) succeed. 00:26:15.323 [2024-11-03 15:44:52.989720] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x166c590/0x16b0aa0) succeed. 
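
Condensed, the target bring-up above is a background launch of nvmf_tgt followed by one transport RPC. rpc_cmd is assumed here to be a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and $SPDK_DIR stands in for the Jenkins workspace checkout:

# Start the target with the same core mask and trace flags as this run
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Once the RPC socket is up, create the RDMA transport (flags as traced)
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
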
00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
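
The shutdown.sh@28-29 loop above appends one block of RPC lines per subsystem to rpcs.txt and replays the file through rpc_cmd at shutdown.sh@36. The trace does not echo the file's contents; the block below is an illustrative reconstruction, consistent with the Malloc1..Malloc10 bdevs and the 192.168.100.8:4420 listener that show up later in the log:

MALLOC_BDEV_SIZE=64 MALLOC_BLOCK_SIZE=512    # values set at shutdown.sh@12-13 above
num_subsystems=({1..10})
rm -rf rpcs.txt
for i in "${num_subsystems[@]}"; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a $NVMF_FIRST_TARGET_IP -s 4420
EOF
done
rpc_cmd < rpcs.txt    # the rpc_cmd replay traced at shutdown.sh@36
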
target/shutdown.sh@36 -- # rpc_cmd 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.583 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:15.583 Malloc1 00:26:15.583 [2024-11-03 15:44:53.226147] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:15.583 Malloc2 00:26:15.583 Malloc3 00:26:15.583 Malloc4 00:26:15.842 Malloc5 00:26:15.842 Malloc6 00:26:15.842 Malloc7 00:26:15.842 Malloc8 00:26:15.842 Malloc9 00:26:15.842 Malloc10 00:26:15.842 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.842 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:15.842 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:15.842 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2376745 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2376745 /var/tmp/bdevperf.sock 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2376745 ']' 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:16.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.100 { 00:26:16.100 "params": { 00:26:16.100 "name": "Nvme$subsystem", 00:26:16.100 "trtype": "$TEST_TRANSPORT", 00:26:16.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.100 "adrfam": "ipv4", 00:26:16.100 "trsvcid": "$NVMF_PORT", 00:26:16.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.100 "hdgst": ${hdgst:-false}, 00:26:16.100 "ddgst": ${ddgst:-false} 00:26:16.100 }, 00:26:16.100 "method": "bdev_nvme_attach_controller" 00:26:16.100 } 00:26:16.100 EOF 00:26:16.100 )") 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.100 { 00:26:16.100 "params": { 00:26:16.100 "name": "Nvme$subsystem", 00:26:16.100 "trtype": "$TEST_TRANSPORT", 00:26:16.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.100 "adrfam": "ipv4", 00:26:16.100 "trsvcid": "$NVMF_PORT", 00:26:16.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.100 "hdgst": ${hdgst:-false}, 00:26:16.100 "ddgst": ${ddgst:-false} 00:26:16.100 }, 00:26:16.100 "method": "bdev_nvme_attach_controller" 00:26:16.100 } 00:26:16.100 EOF 00:26:16.100 )") 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.100 { 00:26:16.100 "params": { 00:26:16.100 "name": "Nvme$subsystem", 00:26:16.100 "trtype": "$TEST_TRANSPORT", 00:26:16.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.100 "adrfam": "ipv4", 00:26:16.100 "trsvcid": "$NVMF_PORT", 00:26:16.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.100 "hdgst": ${hdgst:-false}, 00:26:16.100 "ddgst": ${ddgst:-false} 00:26:16.100 }, 00:26:16.100 "method": "bdev_nvme_attach_controller" 00:26:16.100 } 00:26:16.100 EOF 00:26:16.100 )") 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.100 { 00:26:16.100 "params": { 00:26:16.100 "name": "Nvme$subsystem", 00:26:16.100 "trtype": "$TEST_TRANSPORT", 00:26:16.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.100 "adrfam": "ipv4", 00:26:16.100 "trsvcid": "$NVMF_PORT", 00:26:16.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.100 "hdgst": ${hdgst:-false}, 00:26:16.100 "ddgst": ${ddgst:-false} 00:26:16.100 }, 00:26:16.100 "method": "bdev_nvme_attach_controller" 00:26:16.100 } 00:26:16.100 EOF 00:26:16.100 )") 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.100 { 00:26:16.100 "params": { 00:26:16.100 "name": "Nvme$subsystem", 00:26:16.100 "trtype": "$TEST_TRANSPORT", 00:26:16.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.100 "adrfam": "ipv4", 00:26:16.100 "trsvcid": "$NVMF_PORT", 00:26:16.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.100 "hdgst": ${hdgst:-false}, 00:26:16.100 "ddgst": ${ddgst:-false} 00:26:16.100 }, 00:26:16.100 "method": "bdev_nvme_attach_controller" 00:26:16.100 } 00:26:16.100 EOF 00:26:16.100 )") 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.100 [2024-11-03 15:44:53.712362] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:26:16.100 [2024-11-03 15:44:53.712417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.100 { 00:26:16.100 "params": { 00:26:16.100 "name": "Nvme$subsystem", 00:26:16.100 "trtype": "$TEST_TRANSPORT", 00:26:16.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.100 "adrfam": "ipv4", 00:26:16.100 "trsvcid": "$NVMF_PORT", 00:26:16.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.100 "hdgst": ${hdgst:-false}, 00:26:16.100 "ddgst": ${ddgst:-false} 00:26:16.100 }, 00:26:16.100 "method": "bdev_nvme_attach_controller" 00:26:16.100 } 00:26:16.100 EOF 00:26:16.100 )") 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.100 { 00:26:16.100 "params": { 00:26:16.100 "name": "Nvme$subsystem", 00:26:16.100 "trtype": "$TEST_TRANSPORT", 00:26:16.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.100 "adrfam": "ipv4", 00:26:16.100 "trsvcid": "$NVMF_PORT", 00:26:16.100 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.100 "hdgst": ${hdgst:-false}, 00:26:16.100 "ddgst": ${ddgst:-false} 00:26:16.100 }, 00:26:16.100 "method": "bdev_nvme_attach_controller" 00:26:16.100 } 00:26:16.100 EOF 00:26:16.100 )") 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.100 { 00:26:16.100 "params": { 00:26:16.100 "name": "Nvme$subsystem", 00:26:16.100 "trtype": "$TEST_TRANSPORT", 00:26:16.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.100 "adrfam": "ipv4", 00:26:16.100 "trsvcid": "$NVMF_PORT", 00:26:16.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.100 "hdgst": ${hdgst:-false}, 00:26:16.100 "ddgst": ${ddgst:-false} 00:26:16.100 }, 00:26:16.100 "method": "bdev_nvme_attach_controller" 00:26:16.100 } 00:26:16.100 EOF 00:26:16.100 )") 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.100 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.101 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.101 { 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme$subsystem", 00:26:16.101 "trtype": "$TEST_TRANSPORT", 00:26:16.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "$NVMF_PORT", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.101 "hdgst": ${hdgst:-false}, 00:26:16.101 "ddgst": ${ddgst:-false} 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 } 00:26:16.101 EOF 00:26:16.101 )") 00:26:16.101 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.101 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.101 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.101 { 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme$subsystem", 00:26:16.101 "trtype": "$TEST_TRANSPORT", 00:26:16.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "$NVMF_PORT", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.101 "hdgst": ${hdgst:-false}, 00:26:16.101 "ddgst": ${ddgst:-false} 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 } 00:26:16.101 EOF 00:26:16.101 )") 00:26:16.101 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.101 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:26:16.101 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:16.101 15:44:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme1", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 },{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme2", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 },{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme3", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 },{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme4", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 },{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme5", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 },{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme6", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 },{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme7", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 },{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme8", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
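
gen_nvmf_target_json, whose expansion dominates the trace above, emits one bdev_nvme_attach_controller fragment per subsystem from a heredoc, joins the fragments with IFS=',', and validates the result with 'jq .'. A reduced sketch; only the joined fragments are visible in the trace, so the subsystems/bdev wrapper below is paraphrased from nvmf/common.sh rather than quoted:

config=()
for subsystem in 1 2; do    # the run above passes 1..10
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join with ',' and let jq pretty-print (and implicitly validate) the result
join_config() { local IFS=,; printf '%s' "${config[*]}"; }
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(join_config) ] } ] }
JSON
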
00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 },{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme9", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 },{ 00:26:16.101 "params": { 00:26:16.101 "name": "Nvme10", 00:26:16.101 "trtype": "rdma", 00:26:16.101 "traddr": "192.168.100.8", 00:26:16.101 "adrfam": "ipv4", 00:26:16.101 "trsvcid": "4420", 00:26:16.101 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:16.101 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:16.101 "hdgst": false, 00:26:16.101 "ddgst": false 00:26:16.101 }, 00:26:16.101 "method": "bdev_nvme_attach_controller" 00:26:16.101 }' 00:26:16.101 [2024-11-03 15:44:53.794478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.101 [2024-11-03 15:44:53.816827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.037 15:44:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.037 15:44:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:26:17.037 15:44:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:17.037 15:44:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.037 15:44:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.037 15:44:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.037 15:44:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2376745 00:26:17.037 15:44:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:17.037 15:44:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:17.974 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2376745 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2376439 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:17.974 15:44:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.974 { 00:26:17.974 "params": { 00:26:17.974 "name": "Nvme$subsystem", 00:26:17.974 "trtype": "$TEST_TRANSPORT", 00:26:17.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.974 "adrfam": "ipv4", 00:26:17.974 "trsvcid": "$NVMF_PORT", 00:26:17.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.974 "hdgst": ${hdgst:-false}, 00:26:17.974 "ddgst": ${ddgst:-false} 00:26:17.974 }, 00:26:17.974 "method": "bdev_nvme_attach_controller" 00:26:17.974 } 00:26:17.974 EOF 00:26:17.974 )") 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.974 { 00:26:17.974 "params": { 00:26:17.974 "name": "Nvme$subsystem", 00:26:17.974 "trtype": "$TEST_TRANSPORT", 00:26:17.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.974 "adrfam": "ipv4", 00:26:17.974 "trsvcid": "$NVMF_PORT", 00:26:17.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.974 "hdgst": ${hdgst:-false}, 00:26:17.974 "ddgst": ${ddgst:-false} 00:26:17.974 }, 00:26:17.974 "method": "bdev_nvme_attach_controller" 00:26:17.974 } 00:26:17.974 EOF 00:26:17.974 )") 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.974 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.974 { 00:26:17.974 "params": { 00:26:17.974 "name": "Nvme$subsystem", 00:26:17.974 "trtype": "$TEST_TRANSPORT", 00:26:17.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.974 "adrfam": "ipv4", 00:26:17.974 "trsvcid": "$NVMF_PORT", 00:26:17.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.975 "hdgst": ${hdgst:-false}, 00:26:17.975 "ddgst": ${ddgst:-false} 00:26:17.975 }, 00:26:17.975 "method": "bdev_nvme_attach_controller" 00:26:17.975 } 00:26:17.975 EOF 00:26:17.975 )") 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.975 { 00:26:17.975 "params": { 00:26:17.975 "name": "Nvme$subsystem", 00:26:17.975 "trtype": "$TEST_TRANSPORT", 00:26:17.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.975 "adrfam": "ipv4", 00:26:17.975 "trsvcid": "$NVMF_PORT", 00:26:17.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.975 "hdgst": ${hdgst:-false}, 00:26:17.975 "ddgst": ${ddgst:-false} 00:26:17.975 }, 00:26:17.975 "method": 
"bdev_nvme_attach_controller" 00:26:17.975 } 00:26:17.975 EOF 00:26:17.975 )") 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.975 { 00:26:17.975 "params": { 00:26:17.975 "name": "Nvme$subsystem", 00:26:17.975 "trtype": "$TEST_TRANSPORT", 00:26:17.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.975 "adrfam": "ipv4", 00:26:17.975 "trsvcid": "$NVMF_PORT", 00:26:17.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.975 "hdgst": ${hdgst:-false}, 00:26:17.975 "ddgst": ${ddgst:-false} 00:26:17.975 }, 00:26:17.975 "method": "bdev_nvme_attach_controller" 00:26:17.975 } 00:26:17.975 EOF 00:26:17.975 )") 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.975 { 00:26:17.975 "params": { 00:26:17.975 "name": "Nvme$subsystem", 00:26:17.975 "trtype": "$TEST_TRANSPORT", 00:26:17.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.975 "adrfam": "ipv4", 00:26:17.975 "trsvcid": "$NVMF_PORT", 00:26:17.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.975 "hdgst": ${hdgst:-false}, 00:26:17.975 "ddgst": ${ddgst:-false} 00:26:17.975 }, 00:26:17.975 "method": "bdev_nvme_attach_controller" 00:26:17.975 } 00:26:17.975 EOF 00:26:17.975 )") 00:26:17.975 [2024-11-03 15:44:55.725159] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:26:17.975 [2024-11-03 15:44:55.725215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377047 ] 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.975 { 00:26:17.975 "params": { 00:26:17.975 "name": "Nvme$subsystem", 00:26:17.975 "trtype": "$TEST_TRANSPORT", 00:26:17.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.975 "adrfam": "ipv4", 00:26:17.975 "trsvcid": "$NVMF_PORT", 00:26:17.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.975 "hdgst": ${hdgst:-false}, 00:26:17.975 "ddgst": ${ddgst:-false} 00:26:17.975 }, 00:26:17.975 "method": "bdev_nvme_attach_controller" 00:26:17.975 } 00:26:17.975 EOF 00:26:17.975 )") 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.975 { 00:26:17.975 "params": { 00:26:17.975 "name": "Nvme$subsystem", 00:26:17.975 "trtype": "$TEST_TRANSPORT", 00:26:17.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.975 "adrfam": "ipv4", 00:26:17.975 "trsvcid": "$NVMF_PORT", 00:26:17.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.975 "hdgst": ${hdgst:-false}, 00:26:17.975 "ddgst": ${ddgst:-false} 00:26:17.975 }, 00:26:17.975 "method": "bdev_nvme_attach_controller" 00:26:17.975 } 00:26:17.975 EOF 00:26:17.975 )") 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.975 { 00:26:17.975 "params": { 00:26:17.975 "name": "Nvme$subsystem", 00:26:17.975 "trtype": "$TEST_TRANSPORT", 00:26:17.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.975 "adrfam": "ipv4", 00:26:17.975 "trsvcid": "$NVMF_PORT", 00:26:17.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.975 "hdgst": ${hdgst:-false}, 00:26:17.975 "ddgst": ${ddgst:-false} 00:26:17.975 }, 00:26:17.975 "method": "bdev_nvme_attach_controller" 00:26:17.975 } 00:26:17.975 EOF 00:26:17.975 )") 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.975 { 00:26:17.975 "params": { 00:26:17.975 "name": 
"Nvme$subsystem", 00:26:17.975 "trtype": "$TEST_TRANSPORT", 00:26:17.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.975 "adrfam": "ipv4", 00:26:17.975 "trsvcid": "$NVMF_PORT", 00:26:17.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.975 "hdgst": ${hdgst:-false}, 00:26:17.975 "ddgst": ${ddgst:-false} 00:26:17.975 }, 00:26:17.975 "method": "bdev_nvme_attach_controller" 00:26:17.975 } 00:26:17.975 EOF 00:26:17.975 )") 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:17.975 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:26:18.235 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:18.235 15:44:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme1", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": "bdev_nvme_attach_controller" 00:26:18.235 },{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme2", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": "bdev_nvme_attach_controller" 00:26:18.235 },{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme3", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": "bdev_nvme_attach_controller" 00:26:18.235 },{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme4", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": "bdev_nvme_attach_controller" 00:26:18.235 },{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme5", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": "bdev_nvme_attach_controller" 00:26:18.235 },{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme6", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": 
"bdev_nvme_attach_controller" 00:26:18.235 },{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme7", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": "bdev_nvme_attach_controller" 00:26:18.235 },{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme8", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": "bdev_nvme_attach_controller" 00:26:18.235 },{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme9", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": "bdev_nvme_attach_controller" 00:26:18.235 },{ 00:26:18.235 "params": { 00:26:18.235 "name": "Nvme10", 00:26:18.235 "trtype": "rdma", 00:26:18.235 "traddr": "192.168.100.8", 00:26:18.235 "adrfam": "ipv4", 00:26:18.235 "trsvcid": "4420", 00:26:18.235 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:18.235 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:18.235 "hdgst": false, 00:26:18.235 "ddgst": false 00:26:18.235 }, 00:26:18.235 "method": "bdev_nvme_attach_controller" 00:26:18.235 }' 00:26:18.235 [2024-11-03 15:44:55.807458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.235 [2024-11-03 15:44:55.829938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.173 Running I/O for 1 seconds... 
00:26:20.111 3653.00 IOPS, 228.31 MiB/s
00:26:20.111 Latency(us)
00:26:20.111 [2024-11-03T14:44:57.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.111 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme1n1 : 1.18 381.78 23.86 0.00 0.00 165693.53 5767.17 189582.54
00:26:20.111 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme2n1 : 1.18 382.25 23.89 0.00 0.00 162926.88 6186.60 182871.65
00:26:20.111 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme3n1 : 1.18 379.33 23.71 0.00 0.00 162026.17 12530.48 176160.77
00:26:20.111 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme4n1 : 1.18 400.98 25.06 0.00 0.00 151032.63 4954.52 131701.15
00:26:20.111 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme5n1 : 1.18 378.69 23.67 0.00 0.00 157455.74 12582.91 155189.25
00:26:20.111 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme6n1 : 1.17 382.40 23.90 0.00 0.00 154973.92 21181.24 117440.51
00:26:20.111 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme7n1 : 1.18 401.98 25.12 0.00 0.00 144681.62 10800.33 110729.63
00:26:20.111 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme8n1 : 1.18 383.16 23.95 0.00 0.00 149181.88 10590.62 104438.17
00:26:20.111 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme9n1 : 1.18 381.18 23.82 0.00 0.00 149028.04 11272.19 97727.28
00:26:20.111 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:20.111 Verification LBA range: start 0x0 length 0x400
00:26:20.111 Nvme10n1 : 1.18 326.21 20.39 0.00 0.00 171459.79 11062.48 195454.57
00:26:20.111 [2024-11-03T14:44:57.901Z] ===================================================================================================================
00:26:20.111 [2024-11-03T14:44:57.901Z] Total : 3797.96 237.37 0.00 0.00 156525.32 4954.52 195454.57
00:26:20.370 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:26:20.370 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:20.370 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:20.370 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:20.370 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:20.370 15:44:58
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:20.370 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:20.370 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:20.370 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:20.630 rmmod nvme_rdma 00:26:20.630 rmmod nvme_fabrics 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2376439 ']' 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2376439 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 2376439 ']' 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 2376439 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2376439 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2376439' 00:26:20.630 killing process with pid 2376439 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 2376439 00:26:20.630 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 2376439 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:21.199 00:26:21.199 real 0m12.847s 00:26:21.199 user 0m27.980s 00:26:21.199 sys 0m6.198s 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:21.199 15:44:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:21.199 ************************************ 00:26:21.199 END TEST nvmf_shutdown_tc1 00:26:21.199 ************************************ 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:21.199 ************************************ 00:26:21.199 START TEST nvmf_shutdown_tc2 00:26:21.199 ************************************ 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:21.199 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:21.200 15:44:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # 
pci_devs+=("${mlx[@]}") 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:21.200 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:21.200 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.200 15:44:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:21.200 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:21.200 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 
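The modprobe sequence just traced is load_ib_rdma_modules from nvmf/common.sh: it loads the InfiniBand/RDMA core stack in dependency order before any interface work happens. A minimal sketch, assuming the modules are built for the running kernel (the explicit error message is an addition for the sketch; the suite itself relies on set -e):

load_ib_rdma_modules() {
    # Only meaningful on Linux; mirror the suite's uname guard.
    [ "$(uname)" = Linux ] || return 0
    local mod
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || { echo "modprobe $mod failed" >&2; return 1; }
    done
}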
00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:21.200 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.201 15:44:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:21.201 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:21.201 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:21.201 altname enp217s0f0np0 00:26:21.201 altname ens818f0np0 00:26:21.201 inet 192.168.100.8/24 scope global mlx_0_0 00:26:21.201 valid_lft forever preferred_lft forever 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:21.201 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:21.201 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:21.201 altname enp217s0f1np1 00:26:21.201 altname ens818f1np1 00:26:21.201 inet 192.168.100.9/24 scope global mlx_0_1 00:26:21.201 valid_lft forever preferred_lft forever 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:21.201 
15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:21.201 192.168.100.9' 00:26:21.201 15:44:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:21.201 192.168.100.9' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:21.201 192.168.100.9' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:21.201 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:21.461 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:21.461 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:21.461 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.461 15:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:21.461 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2377684 00:26:21.461 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2377684 00:26:21.461 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2377684 ']' 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
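The address discovery traced above (common.sh@116/@117 and @485/@486) reduces to one ip(8) one-liner per RDMA interface plus head/tail over the collected list. A standalone sketch using the interface names this rig happens to expose; substitute your own:

get_ip_address() {
    local interface=$1
    # Field 4 of `ip -o -4 addr show` is the CIDR address; drop the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
# On this rig: 192.168.100.8 and 192.168.100.9 respectively.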
00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:21.462 [2024-11-03 15:44:59.035903] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:26:21.462 [2024-11-03 15:44:59.035950] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.462 [2024-11-03 15:44:59.113582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.462 [2024-11-03 15:44:59.135792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.462 [2024-11-03 15:44:59.135833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.462 [2024-11-03 15:44:59.135842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.462 [2024-11-03 15:44:59.135850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.462 [2024-11-03 15:44:59.135857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.462 [2024-11-03 15:44:59.137488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.462 [2024-11-03 15:44:59.137572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:21.462 [2024-11-03 15:44:59.137682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.462 [2024-11-03 15:44:59.137683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:21.462 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:21.722 [2024-11-03 15:44:59.293352] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd54f50/0xd59400) succeed. 00:26:21.722 [2024-11-03 15:44:59.302385] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd56590/0xd9aaa0) succeed. 
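nvmf_tgt came up with -m 0x1E, i.e. binary 11110, which is exactly the four reactors on cores 1-4 reported above; core 0 is left free for the bdevperf client started later with -c 0x1. The rpc_cmd at shutdown.sh@21 that produced the two "Create IB device" lines is equivalent to this manual call (rpc.py path relative to the SPDK tree):

# Same transport options as the traced rpc_cmd; -s names the target's RPC socket.
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
    -t rdma --num-shared-buffers 1024 -u 8192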
00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.722 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:21.981 Malloc1 00:26:21.981 [2024-11-03 15:44:59.542431] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:21.981 Malloc2 00:26:21.981 Malloc3 00:26:21.981 Malloc4 00:26:21.981 Malloc5 00:26:21.981 Malloc6 00:26:22.241 Malloc7 00:26:22.241 Malloc8 00:26:22.241 Malloc9 00:26:22.241 Malloc10 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2377952 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2377952 /var/tmp/bdevperf.sock 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2377952 ']' 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:22.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
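Malloc1 through Malloc10 above are created from the rpcs.txt batch that the cat loop at shutdown.sh@29 assembled, one block per subsystem, replayed in a single rpc.py pass. Issued by hand, each block corresponds roughly to the calls below; the bdev size and block size are illustrative (the batch file's actual values are not shown in this trace), and -a allows any host NQN:

for i in $(seq 1 10); do
    # 128 MiB Malloc bdev with 512-byte blocks behind each cnode (sizes assumed).
    scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 512
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # Matches the "Listening on 192.168.100.8 port 4420" notice above.
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done

The bdevperf client launched right afterwards (-q 64 -o 65536 -w verify -t 10, PID 2377952) then attaches to all ten controllers using the gen_nvmf_target_json output fed in over /dev/fd/63.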
00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.241 { 00:26:22.241 "params": { 00:26:22.241 "name": "Nvme$subsystem", 00:26:22.241 "trtype": "$TEST_TRANSPORT", 00:26:22.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.241 "adrfam": "ipv4", 00:26:22.241 "trsvcid": "$NVMF_PORT", 00:26:22.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.241 "hdgst": ${hdgst:-false}, 00:26:22.241 "ddgst": ${ddgst:-false} 00:26:22.241 }, 00:26:22.241 "method": "bdev_nvme_attach_controller" 00:26:22.241 } 00:26:22.241 EOF 00:26:22.241 )") 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.241 { 00:26:22.241 "params": { 00:26:22.241 "name": "Nvme$subsystem", 00:26:22.241 "trtype": "$TEST_TRANSPORT", 00:26:22.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.241 "adrfam": "ipv4", 00:26:22.241 "trsvcid": "$NVMF_PORT", 00:26:22.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.241 "hdgst": ${hdgst:-false}, 00:26:22.241 "ddgst": ${ddgst:-false} 00:26:22.241 }, 00:26:22.241 "method": "bdev_nvme_attach_controller" 00:26:22.241 } 00:26:22.241 EOF 00:26:22.241 )") 00:26:22.241 15:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.241 { 00:26:22.241 "params": { 00:26:22.241 "name": "Nvme$subsystem", 00:26:22.241 "trtype": "$TEST_TRANSPORT", 00:26:22.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.241 "adrfam": "ipv4", 00:26:22.241 "trsvcid": "$NVMF_PORT", 00:26:22.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.241 "hdgst": ${hdgst:-false}, 00:26:22.241 "ddgst": ${ddgst:-false} 00:26:22.241 }, 00:26:22.241 "method": "bdev_nvme_attach_controller" 00:26:22.241 } 00:26:22.241 EOF 00:26:22.241 )") 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.241 { 00:26:22.241 "params": { 00:26:22.241 "name": "Nvme$subsystem", 00:26:22.241 "trtype": "$TEST_TRANSPORT", 00:26:22.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.241 "adrfam": "ipv4", 00:26:22.241 "trsvcid": "$NVMF_PORT", 00:26:22.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.241 "hdgst": ${hdgst:-false}, 00:26:22.241 "ddgst": ${ddgst:-false} 00:26:22.241 }, 00:26:22.241 "method": "bdev_nvme_attach_controller" 00:26:22.241 } 00:26:22.241 EOF 00:26:22.241 )") 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.241 { 00:26:22.241 "params": { 00:26:22.241 "name": "Nvme$subsystem", 00:26:22.241 "trtype": "$TEST_TRANSPORT", 00:26:22.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.241 "adrfam": "ipv4", 00:26:22.241 "trsvcid": "$NVMF_PORT", 00:26:22.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.241 "hdgst": ${hdgst:-false}, 00:26:22.241 "ddgst": ${ddgst:-false} 00:26:22.241 }, 00:26:22.241 "method": "bdev_nvme_attach_controller" 00:26:22.241 } 00:26:22.241 EOF 00:26:22.241 )") 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.241 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.241 { 00:26:22.241 "params": { 00:26:22.241 "name": "Nvme$subsystem", 00:26:22.241 "trtype": "$TEST_TRANSPORT", 00:26:22.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.241 "adrfam": "ipv4", 00:26:22.241 "trsvcid": "$NVMF_PORT", 00:26:22.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.241 "hdgst": ${hdgst:-false}, 00:26:22.241 "ddgst": ${ddgst:-false} 00:26:22.241 }, 00:26:22.241 "method": "bdev_nvme_attach_controller" 00:26:22.241 } 00:26:22.241 EOF 00:26:22.241 )") 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.502 [2024-11-03 15:45:00.030409] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:26:22.502 [2024-11-03 15:45:00.030468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377952 ] 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.502 { 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme$subsystem", 00:26:22.502 "trtype": "$TEST_TRANSPORT", 00:26:22.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "$NVMF_PORT", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.502 "hdgst": ${hdgst:-false}, 00:26:22.502 "ddgst": ${ddgst:-false} 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 } 00:26:22.502 EOF 00:26:22.502 )") 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.502 { 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme$subsystem", 00:26:22.502 "trtype": "$TEST_TRANSPORT", 00:26:22.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "$NVMF_PORT", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.502 "hdgst": ${hdgst:-false}, 00:26:22.502 "ddgst": ${ddgst:-false} 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 } 00:26:22.502 EOF 00:26:22.502 )") 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.502 { 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme$subsystem", 00:26:22.502 "trtype": "$TEST_TRANSPORT", 00:26:22.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "$NVMF_PORT", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.502 "hdgst": ${hdgst:-false}, 00:26:22.502 "ddgst": ${ddgst:-false} 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 } 00:26:22.502 EOF 00:26:22.502 )") 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:22.502 { 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme$subsystem", 00:26:22.502 "trtype": "$TEST_TRANSPORT", 00:26:22.502 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "$NVMF_PORT", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.502 "hdgst": ${hdgst:-false}, 00:26:22.502 "ddgst": ${ddgst:-false} 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 } 00:26:22.502 EOF 00:26:22.502 )") 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:26:22.502 15:45:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme1", 00:26:22.502 "trtype": "rdma", 00:26:22.502 "traddr": "192.168.100.8", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "4420", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:22.502 "hdgst": false, 00:26:22.502 "ddgst": false 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 },{ 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme2", 00:26:22.502 "trtype": "rdma", 00:26:22.502 "traddr": "192.168.100.8", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "4420", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:22.502 "hdgst": false, 00:26:22.502 "ddgst": false 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 },{ 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme3", 00:26:22.502 "trtype": "rdma", 00:26:22.502 "traddr": "192.168.100.8", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "4420", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:22.502 "hdgst": false, 00:26:22.502 "ddgst": false 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 },{ 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme4", 00:26:22.502 "trtype": "rdma", 00:26:22.502 "traddr": "192.168.100.8", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "4420", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:22.502 "hdgst": false, 00:26:22.502 "ddgst": false 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 },{ 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme5", 00:26:22.502 "trtype": "rdma", 00:26:22.502 "traddr": "192.168.100.8", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "4420", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:22.502 "hdgst": false, 00:26:22.502 "ddgst": false 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 },{ 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme6", 00:26:22.502 "trtype": "rdma", 00:26:22.502 "traddr": "192.168.100.8", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "4420", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:22.502 "hdgst": false, 00:26:22.502 "ddgst": false 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 },{ 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme7", 00:26:22.502 
"trtype": "rdma", 00:26:22.502 "traddr": "192.168.100.8", 00:26:22.502 "adrfam": "ipv4", 00:26:22.502 "trsvcid": "4420", 00:26:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:22.502 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:22.502 "hdgst": false, 00:26:22.502 "ddgst": false 00:26:22.502 }, 00:26:22.502 "method": "bdev_nvme_attach_controller" 00:26:22.502 },{ 00:26:22.502 "params": { 00:26:22.502 "name": "Nvme8", 00:26:22.502 "trtype": "rdma", 00:26:22.503 "traddr": "192.168.100.8", 00:26:22.503 "adrfam": "ipv4", 00:26:22.503 "trsvcid": "4420", 00:26:22.503 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:22.503 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:22.503 "hdgst": false, 00:26:22.503 "ddgst": false 00:26:22.503 }, 00:26:22.503 "method": "bdev_nvme_attach_controller" 00:26:22.503 },{ 00:26:22.503 "params": { 00:26:22.503 "name": "Nvme9", 00:26:22.503 "trtype": "rdma", 00:26:22.503 "traddr": "192.168.100.8", 00:26:22.503 "adrfam": "ipv4", 00:26:22.503 "trsvcid": "4420", 00:26:22.503 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:22.503 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:22.503 "hdgst": false, 00:26:22.503 "ddgst": false 00:26:22.503 }, 00:26:22.503 "method": "bdev_nvme_attach_controller" 00:26:22.503 },{ 00:26:22.503 "params": { 00:26:22.503 "name": "Nvme10", 00:26:22.503 "trtype": "rdma", 00:26:22.503 "traddr": "192.168.100.8", 00:26:22.503 "adrfam": "ipv4", 00:26:22.503 "trsvcid": "4420", 00:26:22.503 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:22.503 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:22.503 "hdgst": false, 00:26:22.503 "ddgst": false 00:26:22.503 }, 00:26:22.503 "method": "bdev_nvme_attach_controller" 00:26:22.503 }' 00:26:22.503 [2024-11-03 15:45:00.116890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.503 [2024-11-03 15:45:00.139491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.441 Running I/O for 10 seconds... 
00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.442 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.701 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.701 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:23.701 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:23.701 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.961 
15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=154 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 154 -ge 100 ']' 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2377952 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2377952 ']' 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2377952 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2377952 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:23.961 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2377952'
killing process with pid 2377952
15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2377952 00:26:24.221 15:45:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2377952
00:26:24.221 Received shutdown signal, test time was about 0.788908 seconds
00:26:24.221
00:26:24.221 Latency(us)
00:26:24.221 [2024-11-03T14:45:02.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:24.221 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme1n1 : 0.77 360.22 22.51 0.00 0.00 173311.29 8755.61 198810.01
00:26:24.221 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme2n1 : 0.78 350.76 21.92 0.00 0.00 173986.71 8703.18 185388.24
00:26:24.221 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme3n1 : 0.78 370.82 23.18 0.00 0.00 161746.67 8860.47 178677.35
00:26:24.221 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme4n1 : 0.78 411.36 25.71 0.00 0.00 142826.54 5400.17 128345.70
00:26:24.221 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme5n1 : 0.78 410.52 25.66 0.00 0.00 140720.05 9542.04 115762.79
00:26:24.221 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme6n1 : 0.78 409.75 25.61 0.00 0.00 137720.71 10590.62 105277.03
00:26:24.221 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme7n1 : 0.78 409.09 25.57 0.00 0.00 134494.95 11010.05 97307.85
00:26:24.221 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme8n1 : 0.78 408.20 25.51 0.00 0.00 132694.51 11848.91 94791.27
00:26:24.221 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme9n1 : 0.79 406.29 25.39 0.00 0.00 130113.13 2949.12 109051.90
00:26:24.221 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:24.221 Verification LBA range: start 0x0 length 0x400
00:26:24.221 Nvme10n1 : 0.77 248.24 15.52 0.00 0.00 207359.45 8441.04 295279.00
[2024-11-03T14:45:02.011Z] ===================================================================================================================
[2024-11-03T14:45:02.011Z] Total : 3785.24 236.58 0.00 0.00 150463.11 2949.12 295279.00
00:26:24.480 15:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2377684 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
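A quick consistency check on the latency table above: every job ran 64 KiB I/Os, so the MiB/s column should equal IOPS divided by 16 (65536 B per I/O / 1048576 B per MiB = 1/16):

  Nvme1n1 row: 360.22 IOPS / 16 = 22.51 MiB/s
  Total row:   3785.24 IOPS / 16 = 236.58 MiB/s

Both match the reported values, so the throughput figures are internally consistent with the -o 65536 I/O size.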
00:26:25.418 rmmod nvme_rdma 00:26:25.418 rmmod nvme_fabrics 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2377684 ']' 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2377684 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2377684 ']' 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2377684 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2377684 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:25.418 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2377684' 00:26:25.419 killing process with pid 2377684 00:26:25.419 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2377684 00:26:25.419 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2377684 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:25.989 00:26:25.989 real 0m4.823s 00:26:25.989 user 0m19.428s 00:26:25.989 sys 0m1.120s 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.989 ************************************ 00:26:25.989 END TEST nvmf_shutdown_tc2 00:26:25.989 ************************************ 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:25.989 ************************************ 00:26:25.989 START TEST nvmf_shutdown_tc3 00:26:25.989 ************************************ 00:26:25.989 15:45:03 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:25.989 15:45:03 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:25.989 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:25.989 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:25.989 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # 
[[ rdma == tcp ]] 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.989 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:25.990 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:25.990 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:26.250 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:26.250 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:26.250 altname enp217s0f0np0 00:26:26.250 altname ens818f0np0 00:26:26.250 inet 192.168.100.8/24 scope global mlx_0_0 00:26:26.250 valid_lft forever preferred_lft forever 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:26.250 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:26.250 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:26.250 altname enp217s0f1np1 00:26:26.250 altname ens818f1np1 00:26:26.250 inet 192.168.100.9/24 scope global mlx_0_1 00:26:26.250 valid_lft forever preferred_lft forever 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:26.250 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:26.251 192.168.100.9' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:26.251 192.168.100.9' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:26.251 192.168.100.9' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:26.251 
15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2378818 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2378818 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2378818 ']' 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:26.251 15:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:26.251 [2024-11-03 15:45:03.994734] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:26:26.251 [2024-11-03 15:45:03.994783] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.511 [2024-11-03 15:45:04.071753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.511 [2024-11-03 15:45:04.094080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.511 [2024-11-03 15:45:04.094118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:26.511 [2024-11-03 15:45:04.094128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.511 [2024-11-03 15:45:04.094139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.511 [2024-11-03 15:45:04.094146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.511 [2024-11-03 15:45:04.095787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.511 [2024-11-03 15:45:04.095877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.511 [2024-11-03 15:45:04.096011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.511 [2024-11-03 15:45:04.096012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:26.511 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:26.511 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:26:26.511 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:26.511 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.511 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:26.511 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.511 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:26.511 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.511 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:26.511 [2024-11-03 15:45:04.255894] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5a0f50/0x5a5400) succeed. 00:26:26.511 [2024-11-03 15:45:04.265131] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5a2590/0x5e6aa0) succeed. 
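The reactor placement in the notices above follows directly from the core masks used in this run: the nvmf target was started with -m 0x1E while the bdevperf side ran with -c 0x1, so the two never share a core:

  0x1E = 0b11110  ->  cores 1, 2, 3, 4  (the four nvmf_tgt reactors above)
  0x01 = 0b00001  ->  core 0            (bdevperf, per its EAL parameters)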
00:26:26.770 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.770 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:26.770 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:26.770 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.770 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:26.770 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:26.770 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.770 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.770 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.771 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:26.771 Malloc1 00:26:26.771 [2024-11-03 15:45:04.504111] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:26.771 Malloc2 00:26:27.030 Malloc3 00:26:27.030 Malloc4 00:26:27.030 Malloc5 00:26:27.030 Malloc6 00:26:27.030 Malloc7 00:26:27.030 Malloc8 00:26:27.290 Malloc9 00:26:27.290 Malloc10 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2379360 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2379360 /var/tmp/bdevperf.sock 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2379360 ']' 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:27.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
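The bdevperf launch traced here pairs the initiator with a process-substituted JSON config, then blocks until its RPC socket is up. Condensed to the shape visible in the trace (gen_nvmf_target_json and waitforlisten are this suite's own helpers):

# Start bdevperf against all ten subsystems; the generated config reaches
# it on /dev/fd/63 via process substitution, exactly as traced above.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# Wait for the UNIX-domain RPC socket before issuing bdevperf RPCs.
waitforlisten "$perfpid" /var/tmp/bdevperf.sock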
00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.290 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 [2024-11-03 15:45:04.997938] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:26:27.291 [2024-11-03 15:45:04.997997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379360 ] 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.291 15:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:27.291 { 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme$subsystem", 00:26:27.291 "trtype": "$TEST_TRANSPORT", 00:26:27.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "$NVMF_PORT", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.291 "hdgst": ${hdgst:-false}, 00:26:27.291 "ddgst": ${ddgst:-false} 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 } 00:26:27.291 EOF 00:26:27.291 )") 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:26:27.291 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme1", 00:26:27.291 "trtype": "rdma", 00:26:27.291 "traddr": "192.168.100.8", 00:26:27.291 "adrfam": "ipv4", 00:26:27.291 "trsvcid": "4420", 00:26:27.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.291 "hdgst": false, 00:26:27.291 "ddgst": false 00:26:27.291 }, 00:26:27.291 "method": "bdev_nvme_attach_controller" 00:26:27.291 },{ 00:26:27.291 "params": { 00:26:27.291 "name": "Nvme2", 00:26:27.292 "trtype": "rdma", 00:26:27.292 "traddr": "192.168.100.8", 00:26:27.292 "adrfam": "ipv4", 00:26:27.292 "trsvcid": "4420", 00:26:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:27.292 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:27.292 "hdgst": false, 00:26:27.292 "ddgst": false 00:26:27.292 }, 00:26:27.292 "method": "bdev_nvme_attach_controller" 00:26:27.292 },{ 00:26:27.292 "params": { 00:26:27.292 "name": "Nvme3", 00:26:27.292 "trtype": "rdma", 00:26:27.292 "traddr": "192.168.100.8", 00:26:27.292 "adrfam": "ipv4", 00:26:27.292 "trsvcid": "4420", 00:26:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:27.292 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:27.292 "hdgst": false, 00:26:27.292 "ddgst": false 00:26:27.292 }, 00:26:27.292 "method": "bdev_nvme_attach_controller" 00:26:27.292 },{ 00:26:27.292 "params": { 00:26:27.292 "name": "Nvme4", 00:26:27.292 "trtype": "rdma", 00:26:27.292 "traddr": "192.168.100.8", 00:26:27.292 "adrfam": "ipv4", 00:26:27.292 "trsvcid": "4420", 00:26:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:27.292 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:27.292 "hdgst": false, 00:26:27.292 "ddgst": false 00:26:27.292 }, 00:26:27.292 "method": "bdev_nvme_attach_controller" 00:26:27.292 },{ 00:26:27.292 "params": { 00:26:27.292 "name": "Nvme5", 00:26:27.292 "trtype": "rdma", 00:26:27.292 "traddr": "192.168.100.8", 00:26:27.292 "adrfam": "ipv4", 00:26:27.292 "trsvcid": "4420", 00:26:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:27.292 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:27.292 "hdgst": false, 00:26:27.292 "ddgst": false 00:26:27.292 }, 00:26:27.292 "method": "bdev_nvme_attach_controller" 00:26:27.292 },{ 00:26:27.292 "params": { 00:26:27.292 "name": "Nvme6", 00:26:27.292 "trtype": "rdma", 00:26:27.292 "traddr": "192.168.100.8", 00:26:27.292 "adrfam": "ipv4", 00:26:27.292 "trsvcid": "4420", 00:26:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:27.292 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:27.292 "hdgst": false, 00:26:27.292 "ddgst": false 00:26:27.292 }, 00:26:27.292 "method": "bdev_nvme_attach_controller" 00:26:27.292 },{ 00:26:27.292 "params": { 00:26:27.292 "name": "Nvme7", 00:26:27.292 "trtype": "rdma", 00:26:27.292 "traddr": "192.168.100.8", 00:26:27.292 "adrfam": "ipv4", 00:26:27.292 "trsvcid": "4420", 00:26:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:27.292 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:27.292 "hdgst": false, 00:26:27.292 "ddgst": false 00:26:27.292 }, 00:26:27.292 "method": "bdev_nvme_attach_controller" 00:26:27.292 },{ 00:26:27.292 "params": { 00:26:27.292 "name": "Nvme8", 00:26:27.292 "trtype": "rdma", 00:26:27.292 "traddr": "192.168.100.8", 00:26:27.292 "adrfam": "ipv4", 00:26:27.292 "trsvcid": "4420", 00:26:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:26:27.292 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:27.292 "hdgst": false, 00:26:27.292 "ddgst": false 00:26:27.292 }, 00:26:27.292 "method": "bdev_nvme_attach_controller" 00:26:27.292 },{ 00:26:27.292 "params": { 00:26:27.292 "name": "Nvme9", 00:26:27.292 "trtype": "rdma", 00:26:27.292 "traddr": "192.168.100.8", 00:26:27.292 "adrfam": "ipv4", 00:26:27.292 "trsvcid": "4420", 00:26:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:27.292 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:27.292 "hdgst": false, 00:26:27.292 "ddgst": false 00:26:27.292 }, 00:26:27.292 "method": "bdev_nvme_attach_controller" 00:26:27.292 },{ 00:26:27.292 "params": { 00:26:27.292 "name": "Nvme10", 00:26:27.292 "trtype": "rdma", 00:26:27.292 "traddr": "192.168.100.8", 00:26:27.292 "adrfam": "ipv4", 00:26:27.292 "trsvcid": "4420", 00:26:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:27.292 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:27.292 "hdgst": false, 00:26:27.292 "ddgst": false 00:26:27.292 }, 00:26:27.292 "method": "bdev_nvme_attach_controller" 00:26:27.292 }' 00:26:27.292 [2024-11-03 15:45:05.078245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.551 [2024-11-03 15:45:05.100813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.489 Running I/O for 10 seconds... 00:26:28.489 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:28.489 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:26:28.489 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:28.489 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.489 15:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.489 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:28.748 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.748 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=27 00:26:28.748 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 27 -ge 100 ']' 00:26:28.748 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=179 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 179 -ge 100 ']' 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2378818 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2378818 ']' 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2378818 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2378818 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:29.008 15:45:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2378818' 00:26:29.008 killing process with pid 2378818 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 2378818 00:26:29.008 15:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 2378818 00:26:29.526 2806.00 IOPS, 175.38 MiB/s [2024-11-03T14:45:07.316Z] 15:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:30.097 [2024-11-03 15:45:07.855470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.855508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:0 sqhd:bc56 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.855521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.855531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:0 sqhd:bc56 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.855540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.855550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:0 sqhd:bc56 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.855559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.855568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:0 sqhd:bc56 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.857768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.097 [2024-11-03 15:45:07.857788] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
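Before tearing anything down, tc3 confirms that I/O is actually flowing: the loop traced above polls bdevperf's iostat until Nvme1n1 has at least 100 completed reads (27 on the first pass, 179 on the second). A condensed sketch of that loop, reusing the suite's rpc_cmd and killprocess helpers:

waitforio() {
    # Poll bdevperf's iostat until the bdev has completed >= 100 reads,
    # so real traffic is in flight before the target is torn down.
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# tc3 then SIGTERMs the nvmf target (pid 2378818 here) and waits for it
# to exit, which is what triggers the controller-failure storm below.
waitforio /var/tmp/bdevperf.sock Nvme1n1
killprocess 2378818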
00:26:30.097 [2024-11-03 15:45:07.857816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.857828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:3034 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.857838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.857847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:3034 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.857857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.857866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:3034 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.857875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.857884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:3034 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.859391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.097 [2024-11-03 15:45:07.859407] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:30.097 [2024-11-03 15:45:07.859429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.859444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:8eb0 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.859454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.859463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:8eb0 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.859472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.859481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:8eb0 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.859491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.859501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:8eb0 p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.861572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.097 [2024-11-03 15:45:07.861586] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:26:30.097 [2024-11-03 15:45:07.861604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.861614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:52fc p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.861624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.861634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:52fc p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.861643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.861652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:52fc p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.861662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.861671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:52fc p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.863490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.097 [2024-11-03 15:45:07.863532] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:30.097 [2024-11-03 15:45:07.863582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.863615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:84cc p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.863648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.863679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:84cc p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.863711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.863741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:84cc p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.863781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.863812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:84cc p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.865523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.097 [2024-11-03 15:45:07.865565] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:26:30.097 [2024-11-03 15:45:07.865616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.865649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:edde p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.865682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.865712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:edde p:1 m:0 dnr:0 00:26:30.097 [2024-11-03 15:45:07.865745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.097 [2024-11-03 15:45:07.865775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:edde p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.865808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.865839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:edde p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.868324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.098 [2024-11-03 15:45:07.868367] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:30.098 [2024-11-03 15:45:07.868414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.868447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:604e p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.868479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.868511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:604e p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.868543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.868574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:604e p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.868606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.868636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:604e p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.871581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.098 [2024-11-03 15:45:07.871621] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:26:30.098 [2024-11-03 15:45:07.871670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.871704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:4d9c p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.871746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.871776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:4d9c p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.871808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.871838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:4d9c p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.871871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.871902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:4d9c p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.874343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.098 [2024-11-03 15:45:07.874385] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:26:30.098 [2024-11-03 15:45:07.874436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.874469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:8dfe p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.874502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.874533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:8dfe p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.874565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.874596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:8dfe p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.874628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.874660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:8dfe p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.877001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.098 [2024-11-03 15:45:07.877042] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
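Everything that follows is the intended fallout of killing the target: each initiator-side controller in turn observes CQ transport error -6 (the RDMA device vanished with the target), is moved to the failed state, and has its queued commands completed as ABORTED - SQ DELETION. A hypothetical cross-check from the bdevperf side, not performed in this trace, would be to query its RPC socket directly while the storm is in progress:

# Hypothetical (not run here): list bdevperf's NVMe-oF controllers
# while the shutdown storm above is still completing.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'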
00:26:30.098 [2024-11-03 15:45:07.877088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.877120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:7ad4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.877152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.877183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:7ad4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.877214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.877244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:7ad4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.877277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.098 [2024-11-03 15:45:07.877314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52029 cdw0:1 sqhd:7ad4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.879853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.098 [2024-11-03 15:45:07.879894] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:26:30.098 [2024-11-03 15:45:07.879945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ccf700 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.879989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cbf680 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000caf600 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c9f580 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c8f500 len:0x10000 key:0x184700 00:26:30.098 
[2024-11-03 15:45:07.880308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c7f480 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c6f400 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c5f380 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c4f300 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c3f280 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c2f200 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c1f180 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c0f100 len:0x10000 key:0x184700 00:26:30.098 [2024-11-03 15:45:07.880557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ff0000 len:0x10000 key:0x184a00 00:26:30.098 
[2024-11-03 15:45:07.880587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000fdff80 len:0x10000 key:0x184a00 00:26:30.098 [2024-11-03 15:45:07.880617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.098 [2024-11-03 15:45:07.880634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bf0000 len:0x10000 key:0x184600 00:26:30.098 [2024-11-03 15:45:07.880647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff80 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bcff00 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bbfe80 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bafe00 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b9fd80 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b8fd00 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b7fc80 len:0x10000 key:0x184600 00:26:30.099 
[2024-11-03 15:45:07.880862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b6fc00 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b5fb80 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b4fb00 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.880977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b3fa80 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.880991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b2fa00 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b1f980 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b0f900 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aff880 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aef800 len:0x10000 key:0x184600 00:26:30.099 
[2024-11-03 15:45:07.881144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000adf780 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000acf700 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000abf680 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aaf600 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a9f580 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a8f500 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a7f480 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a6f400 len:0x10000 key:0x184600 00:26:30.099 [2024-11-03 15:45:07.881396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ad50000 len:0x10000 key:0x183a00 00:26:30.099 
[2024-11-03 15:45:07.881426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ad71000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ad92000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000adb3000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d66f000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d64e000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d62d000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d60c000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ded0000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000def1000 len:0x10000 key:0x183a00 00:26:30.099 
[2024-11-03 15:45:07.881717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df12000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.099 [2024-11-03 15:45:07.881766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df33000 len:0x10000 key:0x183a00 00:26:30.099 [2024-11-03 15:45:07.881779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.881797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df54000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.881811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.881829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df75000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.881841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.881861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df96000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.881874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.881892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfb7000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.881905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.881924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d03f000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.881938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.881955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d01e000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.881976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.881995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cffd000 len:0x10000 key:0x183a00 00:26:30.100 
[2024-11-03 15:45:07.882008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.882027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfdc000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.882040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.882058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfbb000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.882071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.882089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf9a000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.882102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.882121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf79000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.882134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.100 [2024-11-03 15:45:07.882152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf58000 len:0x10000 key:0x183a00 00:26:30.100 [2024-11-03 15:45:07.882165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:08d4 p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.885645] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:26:30.371 [2024-11-03 15:45:07.885705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000fbfe80 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.885735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.885761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000fafe00 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.885775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.885793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f9fd80 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.885807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.885824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f8fd00 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.885838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.885856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f7fc80 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.885869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.885886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f6fc00 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.885900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.885917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f5fb80 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.885930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.885948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f4fb00 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.885961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.886035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f3fa80 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.886048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 
00:26:30.371 [2024-11-03 15:45:07.886066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f2fa00 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.886079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.886098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f1f980 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.886111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.886128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000f0f900 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.886142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.886162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000eff880 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.886175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.886193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000eef800 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.886206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.886223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000edf780 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.886236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.886253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ecf700 len:0x10000 key:0x184a00 00:26:30.371 [2024-11-03 15:45:07.886267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.371 [2024-11-03 15:45:07.886284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ebf680 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000eaf600 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 
00:26:30.372 [2024-11-03 15:45:07.886345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e9f580 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e8f500 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e7f480 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e6f400 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e5f380 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e4f300 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e3f280 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e2f200 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e1f180 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 
00:26:30.372 [2024-11-03 15:45:07.886623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000e0f100 len:0x10000 key:0x184a00 00:26:30.372 [2024-11-03 15:45:07.886636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010011f0000 len:0x10000 key:0x181b00 00:26:30.372 [2024-11-03 15:45:07.886666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010011dff80 len:0x10000 key:0x181b00 00:26:30.372 [2024-11-03 15:45:07.886696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010011cff00 len:0x10000 key:0x181b00 00:26:30.372 [2024-11-03 15:45:07.886727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009ac0000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.886756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009ae1000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.886789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009b02000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.886819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009b23000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.886853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009b44000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.886883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 
00:26:30.372 [2024-11-03 15:45:07.886902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009b65000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.886915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009b86000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.886946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.886964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009ba7000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.886984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009bc8000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009be9000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009c0a000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009c2b000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009c4c000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009c6d000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 
00:26:30.372 [2024-11-03 15:45:07.887190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ef50000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d83d000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d12000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d33000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d54000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d75000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.372 [2024-11-03 15:45:07.887384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d96000 len:0x10000 key:0x183a00 00:26:30.372 [2024-11-03 15:45:07.887397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009db7000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009dd8000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 
00:26:30.373 [2024-11-03 15:45:07.887482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009df9000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e1a000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e3b000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e5c000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e7d000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009e9e000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e2cf000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b170000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.887737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f97f000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 
00:26:30.373 [2024-11-03 15:45:07.887768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f95e000 len:0x10000 key:0x183a00 00:26:30.373 [2024-11-03 15:45:07.887781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:483e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892105] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:26:30.373 [2024-11-03 15:45:07.892144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100139fd80 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100138fd00 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100137fc80 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100136fc00 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100135fb80 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100134fb00 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100133fa80 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100132fa00 len:0x10000 key:0x181e00 
00:26:30.373 [2024-11-03 15:45:07.892394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100131f980 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100130f900 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010012ff880 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010012ef800 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010012df780 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010012cf700 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010012bf680 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010012af600 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100129f580 len:0x10000 key:0x181e00 
00:26:30.373 [2024-11-03 15:45:07.892674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100128f500 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100127f480 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100126f400 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100125f380 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100124f300 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.373 [2024-11-03 15:45:07.892848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100123f280 len:0x10000 key:0x181e00 00:26:30.373 [2024-11-03 15:45:07.892861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.892879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100122f200 len:0x10000 key:0x181e00 00:26:30.374 [2024-11-03 15:45:07.892893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.892911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100121f180 len:0x10000 key:0x181e00 00:26:30.374 [2024-11-03 15:45:07.892924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.892942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100120f100 len:0x10000 key:0x181e00 
00:26:30.374 [2024-11-03 15:45:07.892957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.892985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010015f0000 len:0x10000 key:0x182e00 00:26:30.374 [2024-11-03 15:45:07.892999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010015dff80 len:0x10000 key:0x182e00 00:26:30.374 [2024-11-03 15:45:07.893030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010015cff00 len:0x10000 key:0x182e00 00:26:30.374 [2024-11-03 15:45:07.893061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010015bfe80 len:0x10000 key:0x182e00 00:26:30.374 [2024-11-03 15:45:07.893092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010015afe00 len:0x10000 key:0x182e00 00:26:30.374 [2024-11-03 15:45:07.893123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100159fd80 len:0x10000 key:0x182e00 00:26:30.374 [2024-11-03 15:45:07.893154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e710000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e731000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e752000 len:0x10000 key:0x183a00 
00:26:30.374 [2024-11-03 15:45:07.893248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e773000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e794000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7b5000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7d6000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b590000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5b1000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5d2000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5f3000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b614000 len:0x10000 key:0x183a00 
00:26:30.374 [2024-11-03 15:45:07.893532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b635000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b656000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b677000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0bf000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fa87000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fa66000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fa45000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fa24000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fa03000 len:0x10000 key:0x183a00 
00:26:30.374 [2024-11-03 15:45:07.893817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9e2000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9c1000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9a0000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd9f000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd7e000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.893979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.374 [2024-11-03 15:45:07.893997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd5d000 len:0x10000 key:0x183a00 00:26:30.374 [2024-11-03 15:45:07.894013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.894031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd3c000 len:0x10000 key:0x183a00 00:26:30.375 [2024-11-03 15:45:07.894044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.894062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd1b000 len:0x10000 key:0x183a00 00:26:30.375 [2024-11-03 15:45:07.894075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.894093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcfa000 len:0x10000 key:0x183a00 
00:26:30.375 [2024-11-03 15:45:07.894106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.894124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcd9000 len:0x10000 key:0x183a00 00:26:30.375 [2024-11-03 15:45:07.894138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.894156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcb8000 len:0x10000 key:0x183a00 00:26:30.375 [2024-11-03 15:45:07.894169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6b9e p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.896964] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:26:30.375 [2024-11-03 15:45:07.897074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100170f900 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010016ff880 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010016ef800 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010016df780 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010016cf700 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010016bf680 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 
15:45:07.897294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010016af600 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100169f580 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100168f500 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100167f480 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100166f400 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100165f380 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100164f300 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100163f280 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100162f200 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 
15:45:07.897572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100161f180 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100160f100 len:0x10000 key:0x182b00 00:26:30.375 [2024-11-03 15:45:07.897617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010019f0000 len:0x10000 key:0x180300 00:26:30.375 [2024-11-03 15:45:07.897649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010019dff80 len:0x10000 key:0x180300 00:26:30.375 [2024-11-03 15:45:07.897679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010019cff00 len:0x10000 key:0x180300 00:26:30.375 [2024-11-03 15:45:07.897710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010019bfe80 len:0x10000 key:0x180300 00:26:30.375 [2024-11-03 15:45:07.897742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010019afe00 len:0x10000 key:0x180300 00:26:30.375 [2024-11-03 15:45:07.897773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100199fd80 len:0x10000 key:0x180300 00:26:30.375 [2024-11-03 15:45:07.897804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100198fd00 len:0x10000 key:0x180300 00:26:30.375 [2024-11-03 15:45:07.897835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 
15:45:07.897853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100197fc80 len:0x10000 key:0x180300 00:26:30.375 [2024-11-03 15:45:07.897866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.375 [2024-11-03 15:45:07.897883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100196fc00 len:0x10000 key:0x180300 00:26:30.375 [2024-11-03 15:45:07.897897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.897914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100195fb80 len:0x10000 key:0x180300 00:26:30.376 [2024-11-03 15:45:07.897927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.897944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100194fb00 len:0x10000 key:0x180300 00:26:30.376 [2024-11-03 15:45:07.897959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.897983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100193fa80 len:0x10000 key:0x180300 00:26:30.376 [2024-11-03 15:45:07.897997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100192fa00 len:0x10000 key:0x180300 00:26:30.376 [2024-11-03 15:45:07.898028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100191f980 len:0x10000 key:0x180300 00:26:30.376 [2024-11-03 15:45:07.898058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100190f900 len:0x10000 key:0x180300 00:26:30.376 [2024-11-03 15:45:07.898090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e920000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 
15:45:07.898141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e941000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e962000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e983000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e9a4000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba97000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab8000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba76000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba55000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba34000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 
15:45:07.898427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba13000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9f2000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9d1000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9b0000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d798000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d777000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db13000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daf2000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd8000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 
15:45:07.898713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff9000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e01a000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e03b000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e05c000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e07d000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e09e000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e500000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ffaf000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 15:45:07.898973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ff8e000 len:0x10000 key:0x183a00 00:26:30.376 [2024-11-03 15:45:07.898987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.376 [2024-11-03 
15:45:07.899005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ff6d000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.899018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.899038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ff4c000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.899052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.899070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ff2b000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.899083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.899102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ff0a000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.899115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:f072 p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903090] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:26:30.377 [2024-11-03 15:45:07.903168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001b5fb80 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.903206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001b4fb00 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.903296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001b3fa80 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.903373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001b2fa00 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.903449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001b1f980 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 
15:45:07.903525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001b0f900 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.903601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001aff880 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.903676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001aef800 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.903759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001adf780 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.903835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001acf700 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.903911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.903953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001abf680 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.904013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001aaf600 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.904089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a9f580 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.904120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a8f500 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 
15:45:07.904151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a7f480 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.904182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a6f400 len:0x10000 key:0x183200 00:26:30.377 [2024-11-03 15:45:07.904213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed1f000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecfe000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecdd000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecbc000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec9b000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec7a000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec59000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 
15:45:07.904436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec38000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec17000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebf6000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebd5000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebb4000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb93000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb72000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb51000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 15:45:07.904694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.377 [2024-11-03 15:45:07.904712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb30000 len:0x10000 key:0x183a00 00:26:30.377 [2024-11-03 
15:45:07.904726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.904744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fee9000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.904757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.904776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fec8000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.904790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.904809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fea7000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.904822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.904841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe86000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.904854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.904873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe65000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.904886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.904904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe44000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.904918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.904936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe23000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.904949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.904975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe02000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.904989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fde1000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 
15:45:07.905021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101bf000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001019e000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001017d000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001015c000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001013b000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001011a000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100f9000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000103cf000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000103ae000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 
15:45:07.905310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001038d000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001036c000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001034b000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001032a000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010309000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000102e8000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000102c7000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000102a6000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010285000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 
15:45:07.905598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010264000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010243000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010222000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010201000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.905743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101e0000 len:0x10000 key:0x183a00 00:26:30.378 [2024-11-03 15:45:07.905758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:c29e p:1 m:0 dnr:0 00:26:30.378 [2024-11-03 15:45:07.908586] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:26:30.378 [2024-11-03 15:45:07.908611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f3fa80 len:0x10000 key:0x183000
00:26:30.378 [2024-11-03 15:45:07.908624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.378 [2024-11-03 15:45:07.908651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f2fa00 len:0x10000 key:0x183000
00:26:30.378 [2024-11-03 15:45:07.908664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.378 [2024-11-03 15:45:07.908679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f1f980 len:0x10000 key:0x183000
00:26:30.378 [2024-11-03 15:45:07.908690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.378 [2024-11-03 15:45:07.908706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f0f900 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001eff880 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001eef800 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001edf780 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ecf700 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ebf680 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001eaf600 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e9f580 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e8f500 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e7f480 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.908974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e6f400 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.908986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.909001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e5f380 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.909012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.909026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e4f300 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.909038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.909053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e3f280 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.909064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.909079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e2f200 len:0x10000 key:0x183000
00:26:30.379 [2024-11-03 15:45:07.909090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0
00:26:30.379 [2024-11-03 15:45:07.909105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e1f180 len:0x10000 key:0x183000 00:26:30.379 [2024-11-03 15:45:07.909116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001e0f100 len:0x10000 key:0x183000 00:26:30.379 [2024-11-03 15:45:07.909142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021f0000 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021dff80 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021cff00 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021bfe80 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010021afe00 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100219fd80 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100218fd00 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 
00:26:30.379 [2024-11-03 15:45:07.909340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100217fc80 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100216fc00 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100215fb80 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100214fb00 len:0x10000 key:0x184100 00:26:30.379 [2024-11-03 15:45:07.909429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aa59000 len:0x10000 key:0x183a00 00:26:30.379 [2024-11-03 15:45:07.909455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aa7a000 len:0x10000 key:0x183a00 00:26:30.379 [2024-11-03 15:45:07.909483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aa9b000 len:0x10000 key:0x183a00 00:26:30.379 [2024-11-03 15:45:07.909510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aabc000 len:0x10000 key:0x183a00 00:26:30.379 [2024-11-03 15:45:07.909536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aadd000 len:0x10000 key:0x183a00 00:26:30.379 [2024-11-03 15:45:07.909562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 
00:26:30.379 [2024-11-03 15:45:07.909578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000aafe000 len:0x10000 key:0x183a00 00:26:30.379 [2024-11-03 15:45:07.909589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ab1f000 len:0x10000 key:0x183a00 00:26:30.379 [2024-11-03 15:45:07.909615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c50000 len:0x10000 key:0x183a00 00:26:30.379 [2024-11-03 15:45:07.909641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.379 [2024-11-03 15:45:07.909657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c71000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c92000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008cb3000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008cd4000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008cf5000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008d16000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 
00:26:30.380 [2024-11-03 15:45:07.909818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d546000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ee48000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3df000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105df000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105be000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001059d000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.909987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001057c000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.909998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001055b000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001053a000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 
00:26:30.380 [2024-11-03 15:45:07.910069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010519000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104f8000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d7000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b6000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010495000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010474000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010453000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010432000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.910287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010411000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 
00:26:30.380 [2024-11-03 15:45:07.910314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ee27000 len:0x10000 key:0x183a00 00:26:30.380 [2024-11-03 15:45:07.910325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:1b32 p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.912879] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:26:30.380 [2024-11-03 15:45:07.912902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100231f980 len:0x10000 key:0x184000 00:26:30.380 [2024-11-03 15:45:07.912914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.912934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100230f900 len:0x10000 key:0x184000 00:26:30.380 [2024-11-03 15:45:07.912945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.912961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022ff880 len:0x10000 key:0x184000 00:26:30.380 [2024-11-03 15:45:07.912996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.913011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022ef800 len:0x10000 key:0x184000 00:26:30.380 [2024-11-03 15:45:07.913022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.913037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022df780 len:0x10000 key:0x184000 00:26:30.380 [2024-11-03 15:45:07.913048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.913064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022cf700 len:0x10000 key:0x184000 00:26:30.380 [2024-11-03 15:45:07.913075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.380 [2024-11-03 15:45:07.913090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022bf680 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010022af600 len:0x10000 key:0x184000 
00:26:30.381 [2024-11-03 15:45:07.913128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100229f580 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100228f500 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100227f480 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100226f400 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100225f380 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100224f300 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100223f280 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100222f200 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100221f180 len:0x10000 key:0x184000 
00:26:30.381 [2024-11-03 15:45:07.913365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100220f100 len:0x10000 key:0x184000 00:26:30.381 [2024-11-03 15:45:07.913391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025f0000 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025dff80 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025cff00 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025bfe80 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025afe00 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100259fd80 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100258fd00 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100257fc80 len:0x10000 key:0x184300 
00:26:30.381 [2024-11-03 15:45:07.913604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100256fc00 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100255fb80 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100254fb00 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100253fa80 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100252fa00 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100251f980 len:0x10000 key:0x184300 00:26:30.381 [2024-11-03 15:45:07.913763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f34f000 len:0x10000 key:0x183a00 00:26:30.381 [2024-11-03 15:45:07.913789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f32e000 len:0x10000 key:0x183a00 00:26:30.381 [2024-11-03 15:45:07.913817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f30d000 len:0x10000 key:0x183a00 
00:26:30.381 [2024-11-03 15:45:07.913846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f2ec000 len:0x10000 key:0x183a00 00:26:30.381 [2024-11-03 15:45:07.913873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f2cb000 len:0x10000 key:0x183a00 00:26:30.381 [2024-11-03 15:45:07.913900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f2aa000 len:0x10000 key:0x183a00 00:26:30.381 [2024-11-03 15:45:07.913927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f289000 len:0x10000 key:0x183a00 00:26:30.381 [2024-11-03 15:45:07.913954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.913976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f268000 len:0x10000 key:0x183a00 00:26:30.381 [2024-11-03 15:45:07.913988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.914003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f247000 len:0x10000 key:0x183a00 00:26:30.381 [2024-11-03 15:45:07.914015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.914030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f226000 len:0x10000 key:0x183a00 00:26:30.381 [2024-11-03 15:45:07.914041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.381 [2024-11-03 15:45:07.914057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f205000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1e4000 len:0x10000 key:0x183a00 
00:26:30.382 [2024-11-03 15:45:07.914094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1c3000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1a2000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f181000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7ff000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000082e7000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000082c6000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000082a5000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008284000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008263000 len:0x10000 key:0x183a00 
00:26:30.382 [2024-11-03 15:45:07.914337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008242000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008221000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008200000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f436000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f415000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f3f4000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f3d3000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f3b2000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f391000 len:0x10000 key:0x183a00 
00:26:30.382 [2024-11-03 15:45:07.914579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f370000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.914621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000085ff000 len:0x10000 key:0x183a00 00:26:30.382 [2024-11-03 15:45:07.914632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:6d8c p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917329] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:26:30.382 [2024-11-03 15:45:07.917353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026ff880 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026ef800 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026df780 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026cf700 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026bf680 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010026af600 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 
15:45:07.917520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100269f580 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100268f500 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100267f480 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100266f400 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100265f380 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100264f300 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100263f280 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.382 [2024-11-03 15:45:07.917705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100262f200 len:0x10000 key:0x184c00 00:26:30.382 [2024-11-03 15:45:07.917716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.917731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100261f180 len:0x10000 key:0x184c00 00:26:30.383 [2024-11-03 15:45:07.917743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 
15:45:07.917759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100260f100 len:0x10000 key:0x184c00 00:26:30.383 [2024-11-03 15:45:07.917771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.917786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029f0000 len:0x10000 key:0x183f00 00:26:30.383 [2024-11-03 15:45:07.917798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.917813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029dff80 len:0x10000 key:0x183f00 00:26:30.383 [2024-11-03 15:45:07.917824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.917839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029cff00 len:0x10000 key:0x183f00 00:26:30.383 [2024-11-03 15:45:07.917850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.917865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029bfe80 len:0x10000 key:0x183f00 00:26:30.383 [2024-11-03 15:45:07.917876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.917891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010029afe00 len:0x10000 key:0x183f00 00:26:30.383 [2024-11-03 15:45:07.917902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.917918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100299fd80 len:0x10000 key:0x183f00 00:26:30.383 [2024-11-03 15:45:07.917929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.917944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100298fd00 len:0x10000 key:0x183f00 00:26:30.383 [2024-11-03 15:45:07.917955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.917977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100297fc80 len:0x10000 key:0x183f00 00:26:30.383 [2024-11-03 15:45:07.917989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 
15:45:07.918004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100296fc00 len:0x10000 key:0x183f00 00:26:30.383 [2024-11-03 15:45:07.918015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ae37000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ae58000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100b7000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100d8000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3e7000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a408000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca30000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca51000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 
15:45:07.918249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db76000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eff5000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000efd4000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000efb3000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ef92000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ef71000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3be000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c39d000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e101000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 
15:45:07.918493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000880f000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000087ee000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000087cd000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000087ac000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000878b000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000876a000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008749000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008728000 len:0x10000 key:0x183a00 00:26:30.383 [2024-11-03 15:45:07.918715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.383 [2024-11-03 15:45:07.918731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008707000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.918744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 
15:45:07.918760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000086e6000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.918772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.918788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000086c5000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.918801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.918817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000086a4000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.918829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.918846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008683000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.918858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.918874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008662000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.918886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.918902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008641000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.918914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.918931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008a1f000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.918943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.918960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000089fe000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.918977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.918995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000089dd000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.919007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 
15:45:07.919025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000089bc000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.919037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.919053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000899b000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.919066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.919082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000897a000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.919094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.919112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008959000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.919124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:cfc6 p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.921887] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:26:30.384 [2024-11-03 15:45:07.921945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf780 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf700 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf680 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf600 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f580 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 
15:45:07.922335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8f500 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7f480 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6f400 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5f380 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4f300 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3f280 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2f200 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1f180 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 15:45:07.922566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0f100 len:0x10000 key:0x184d00 00:26:30.384 [2024-11-03 
15:45:07.922594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002df0000 len:0x10000 key:0x183c00 00:26:30.384 [2024-11-03 15:45:07.922622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff80 len:0x10000 key:0x183c00 00:26:30.384 [2024-11-03 15:45:07.922649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f76f000 len:0x10000 key:0x183a00 00:26:30.384 [2024-11-03 15:45:07.922680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.384 [2024-11-03 15:45:07.922696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f74e000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.922709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.922725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f72d000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.922738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.922754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f70c000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.922767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.922783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f6eb000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.922796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.922812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f6ca000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.922824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.922841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f6a9000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 
15:45:07.922853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.922870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f688000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.922883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.922900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f667000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.922912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.922929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f646000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.922941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.922958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f625000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.922996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f604000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5e3000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5c2000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5a1000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f580000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 
15:45:07.923146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c2f000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c0e000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bed000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bcc000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bab000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b8a000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b69000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b48000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b27000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 
15:45:07.923408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008b06000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008ae5000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008ac4000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008aa3000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008a82000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008a61000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008a40000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d567000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc97000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 
15:45:07.923670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc76000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc55000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc34000 len:0x10000 key:0x183a00 00:26:30.385 [2024-11-03 15:45:07.923762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.385 [2024-11-03 15:45:07.923779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc13000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.923791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.923807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fbf2000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.923820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.923837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fbd1000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.923849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.923866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fbb0000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.923878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.923895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008e3f000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.923907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.923923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008e1e000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 
15:45:07.923935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.923952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008dfd000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.923965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.923987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008ddc000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.924000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.924016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008dbb000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.924031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.924048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008d9a000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.924060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.924078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008d79000 len:0x10000 key:0x183a00 00:26:30.386 [2024-11-03 15:45:07.924090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:629a p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.926708] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
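[editor's note] The completion repeated throughout the dump above, "ABORTED - SQ DELETION (00/08)", is NVMe generic command status (SCT 0x0), status code 0x08, Command Aborted due to SQ Deletion: the in-flight READ/WRITE commands are failed back because their submission queue is torn down while the controller failover runs. When a flood like this needs summarizing offline, a small parser can collapse it into counts per queue and status. The sketch below is a hypothetical helper, not tooling from this test run, and assumes only the nvme_qpair.c log format shown in this section:

    import re
    from collections import Counter

    # Hypothetical log summarizer: collapse the spdk_nvme_print_completion
    # flood into counts keyed by (qid, status text, sct, sc).
    PAT = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: "
        r"(?P<status>[A-Z -]+?) \((?P<sct>[0-9a-f]+)/(?P<sc>[0-9a-f]+)\) "
        r"qid:(?P<qid>\d+)"
    )

    def summarize_completions(log_text: str) -> Counter:
        """Count completion notices per (qid, status, sct, sc)."""
        return Counter(
            (m["qid"], m["status"].strip(), m["sct"], m["sc"])
            for m in PAT.finditer(log_text)
        )

    # e.g. summarize_completions(open("console.log").read()) maps
    # ("1", "ABORTED - SQ DELETION", "00", "08") to the entry count.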
00:26:30.386 [2024-11-03 15:45:07.926766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf680 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.926799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.926853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf600 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.926886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.926930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f580 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.926962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8f500 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.927053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7f480 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.927131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6f400 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.927208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5f380 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.927263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4f300 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.927292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3f280 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.927324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 
00:26:30.386 [2024-11-03 15:45:07.927341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2f200 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.927353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1f180 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.927381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0f100 len:0x10000 key:0x184800 00:26:30.386 [2024-11-03 15:45:07.927411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031f0000 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff80 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cff00 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfe80 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afe00 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fd80 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 
00:26:30.386 [2024-11-03 15:45:07.927598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fd00 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fc80 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316fc00 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315fb80 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314fb00 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313fa80 len:0x10000 key:0x184500 00:26:30.386 [2024-11-03 15:45:07.927754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.386 [2024-11-03 15:45:07.927770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312fa00 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.927782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.927798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f980 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.927811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.927827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f900 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.927839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 
00:26:30.387 [2024-11-03 15:45:07.927855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff880 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.927868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.927884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef800 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.927896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.927912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df780 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.927925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.927942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf700 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.927954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.927977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf680 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.927990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af600 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f580 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308f500 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307f480 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 
00:26:30.387 [2024-11-03 15:45:07.928119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306f400 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305f380 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304f300 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303f280 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302f200 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301f180 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300f100 len:0x10000 key:0x184500 00:26:30.387 [2024-11-03 15:45:07.928302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033f0000 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff80 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 
00:26:30.387 [2024-11-03 15:45:07.928375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cff00 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfe80 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afe00 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fd80 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fd00 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fc80 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336fc00 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335fb80 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334fb00 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 
00:26:30.387 [2024-11-03 15:45:07.928631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333fa80 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332fa00 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f980 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f900 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff880 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef800 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.387 [2024-11-03 15:45:07.928801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df780 len:0x10000 key:0x184b00 00:26:30.387 [2024-11-03 15:45:07.928813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.388 [2024-11-03 15:45:07.928839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf700 len:0x10000 key:0x184b00 00:26:30.388 [2024-11-03 15:45:07.928851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.388 [2024-11-03 15:45:07.928866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf680 len:0x10000 key:0x184b00 00:26:30.388 [2024-11-03 15:45:07.928878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 
00:26:30.388 [2024-11-03 15:45:07.928893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf700 len:0x10000 key:0x184800 00:26:30.388 [2024-11-03 15:45:07.928906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52029 cdw0:4d6d4000 sqhd:dd1c p:1 m:0 dnr:0 00:26:30.388 [2024-11-03 15:45:07.945809] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946018] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946072] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946117] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946164] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946207] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946252] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946295] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946340] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946370] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:26:30.388 [2024-11-03 15:45:07.946384] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
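The repeated "Unable to perform failover, already in progress" notices show bdev_nvme refusing to start a second failover for each controller while the first is still underway. For context, the nvme bdev module's reconnect/failover behavior is configurable before controllers are attached; a minimal sketch of such tuning (flag names from rpc.py bdev_nvme_set_options; the values are illustrative, not what this test uses):

    # Illustrative only: retry for up to 60 s, attempt a reconnect every 5 s,
    # and fail pending I/O over to another path after 10 s.
    scripts/rpc.py bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 60 \
        --reconnect-delay-sec 5 \
        --fast-io-failure-timeout-sec 10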
00:26:30.388 task offset: 38912 on job bdev=Nvme1n1 fails
00:26:30.388 
00:26:30.388 Latency(us)
00:26:30.388 [2024-11-03T14:45:08.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:30.388 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme1n1 ended in about 1.92 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme1n1 : 1.92 145.82 9.11 33.33 0.00 354933.83 8074.04 1114007.14
00:26:30.388 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme2n1 ended in about 1.93 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme2n1 : 1.93 149.03 9.31 33.23 0.00 346284.76 8808.04 1107296.26
00:26:30.388 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme3n1 ended in about 1.93 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme3n1 : 1.93 149.06 9.32 33.12 0.00 343299.02 18559.80 1107296.26
00:26:30.388 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme4n1 ended in about 1.94 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme4n1 : 1.94 152.29 9.52 33.04 0.00 335111.78 4456.45 1107296.26
00:26:30.388 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme5n1 ended in about 1.94 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme5n1 : 1.94 139.94 8.75 32.93 0.00 356142.60 33135.00 1100585.37
00:26:30.388 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme6n1 ended in about 1.95 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme6n1 : 1.95 147.31 9.21 32.85 0.00 339412.70 38168.17 1107296.26
00:26:30.388 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme7n1 ended in about 1.95 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme7n1 : 1.95 147.50 9.22 32.78 0.00 336197.30 47185.92 1100585.37
00:26:30.388 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme8n1 ended in about 1.96 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme8n1 : 1.96 143.59 8.97 32.70 0.00 341286.87 53267.66 1093874.48
00:26:30.388 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme9n1 ended in about 1.96 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme9n1 : 1.96 138.64 8.66 32.62 0.00 348511.71 42152.76 1093874.48
00:26:30.388 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:30.388 Job: Nvme10n1 ended in about 1.97 seconds with error
00:26:30.388 Verification LBA range: start 0x0 length 0x400
00:26:30.388 Nvme10n1 : 1.97 130.16 8.14 32.54 0.00 363399.74 66689.43 1093874.48
00:26:30.388 [2024-11-03T14:45:08.178Z] ===================================================================================================================
00:26:30.388 [2024-11-03T14:45:08.178Z] Total : 1443.35 90.21 329.15 0.00 346216.08 4456.45 1114007.14
00:26:30.388 [2024-11-03 15:45:07.973143] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:30.388 [2024-11-03 15:45:07.973169]
nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.973183] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.973195] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.973204] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.973304] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.973316] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.973327] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.973336] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.973346] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.973356] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:26:30.388 [2024-11-03 15:45:07.989578] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 15:45:07.989639] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.388 [2024-11-03 15:45:07.989667] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed000 00:26:30.388 [2024-11-03 15:45:07.989783] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 15:45:07.989819] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.388 [2024-11-03 15:45:07.989843] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168e5280 00:26:30.388 [2024-11-03 15:45:07.989954] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 15:45:07.989998] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.388 [2024-11-03 15:45:07.990023] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ba2c0 00:26:30.388 [2024-11-03 15:45:07.990182] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 15:45:07.990216] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.388 [2024-11-03 15:45:07.990240] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168b9ac0 00:26:30.388 [2024-11-03 15:45:07.990461] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 
15:45:07.990498] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.388 [2024-11-03 15:45:07.990522] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688e300 00:26:30.388 [2024-11-03 15:45:07.990658] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 15:45:07.990692] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.388 [2024-11-03 15:45:07.990717] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168bf2c0 00:26:30.388 [2024-11-03 15:45:07.990835] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 15:45:07.990870] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.388 [2024-11-03 15:45:07.990895] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688d2c0 00:26:30.388 [2024-11-03 15:45:07.991032] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 15:45:07.991068] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.388 [2024-11-03 15:45:07.991093] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688dac0 00:26:30.388 [2024-11-03 15:45:07.991239] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 15:45:07.991275] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.388 [2024-11-03 15:45:07.991301] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168dc080 00:26:30.388 [2024-11-03 15:45:07.991422] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.388 [2024-11-03 15:45:07.991457] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.389 [2024-11-03 15:45:07.991486] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168bd300 00:26:30.648 15:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2379360 00:26:30.648 15:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:26:30.648 15:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2379360 00:26:30.648 15:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:26:30.648 15:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.648 15:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:26:30.648 15:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- 
# case "$(type -t "$arg")" in 00:26:30.648 15:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2379360 00:26:31.216 [2024-11-03 15:45:08.994077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.216 [2024-11-03 15:45:08.994099] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:31.216 [2024-11-03 15:45:08.995415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.216 [2024-11-03 15:45:08.995457] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:31.216 [2024-11-03 15:45:08.997028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.216 [2024-11-03 15:45:08.997070] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:31.216 [2024-11-03 15:45:08.998782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.216 [2024-11-03 15:45:08.998822] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:31.216 [2024-11-03 15:45:09.000481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.216 [2024-11-03 15:45:09.000523] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:31.216 [2024-11-03 15:45:09.001925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.216 [2024-11-03 15:45:09.001976] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:26:31.216 [2024-11-03 15:45:09.003450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.216 [2024-11-03 15:45:09.003490] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:31.216 [2024-11-03 15:45:09.004987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.216 [2024-11-03 15:45:09.005004] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:26:31.476 [2024-11-03 15:45:09.006248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.476 [2024-11-03 15:45:09.006289] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:31.476 [2024-11-03 15:45:09.007748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.476 [2024-11-03 15:45:09.007788] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:26:31.476 [2024-11-03 15:45:09.007815] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:31.476 [2024-11-03 15:45:09.007844] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:31.476 [2024-11-03 15:45:09.007884] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:26:31.476 [2024-11-03 15:45:09.007903] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:31.476 [2024-11-03 15:45:09.007916] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:31.476 [2024-11-03 15:45:09.007928] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:26:31.476 [2024-11-03 15:45:09.007944] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:31.476 [2024-11-03 15:45:09.007959] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:31.476 [2024-11-03 15:45:09.007983] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:26:31.476 [2024-11-03 15:45:09.007998] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:31.476 [2024-11-03 15:45:09.008010] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:31.476 [2024-11-03 15:45:09.008022] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:26:31.476 [2024-11-03 15:45:09.008111] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:31.476 [2024-11-03 15:45:09.008130] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:26:31.476 [2024-11-03 15:45:09.008145] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:26:31.476 [2024-11-03 15:45:09.008160] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:26:31.476 [2024-11-03 15:45:09.008175] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:26:31.477 [2024-11-03 15:45:09.008187] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:26:31.477 [2024-11-03 15:45:09.008198] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:26:31.477 [2024-11-03 15:45:09.008213] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:26:31.477 [2024-11-03 15:45:09.008225] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:26:31.477 [2024-11-03 15:45:09.008237] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:26:31.477 [2024-11-03 15:45:09.008252] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:31.477 [2024-11-03 15:45:09.008264] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:31.477 [2024-11-03 15:45:09.008275] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:26:31.477 [2024-11-03 15:45:09.008290] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:26:31.477 [2024-11-03 15:45:09.008302] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:26:31.477 [2024-11-03 15:45:09.008314] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:26:31.477 [2024-11-03 15:45:09.008329] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:31.477 [2024-11-03 15:45:09.008340] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:31.477 [2024-11-03 15:45:09.008351] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:26:31.477 [2024-11-03 15:45:09.008367] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:26:31.477 [2024-11-03 15:45:09.008379] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:26:31.477 [2024-11-03 15:45:09.008390] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:26:31.477 [2024-11-03 15:45:09.008466] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:26:31.477 [2024-11-03 15:45:09.008484] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:26:31.477 [2024-11-03 15:45:09.008504] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:26:31.477 [2024-11-03 15:45:09.008519] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
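The es=255, es=127, es=1 progression a few lines below is the tail of the harness's NOT/valid_exec_arg wrapper that entered above: wait returns 255 for the killed bdevperf process, anything above 128 (a signal death) is collapsed to 127, and every remaining non-zero status is normalized to 1 so the expected-failure assertion passes. A minimal sketch of the pattern (an approximation of the helper's behavior, not its exact source):

    # NOT <cmd...> succeeds only when <cmd> fails -- an illustrative rewrite.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=127   # treat signal deaths as plain failures
        (( es != 0 )) && es=1      # normalize any failure to 1
        (( es == 1 ))              # pass (return 0) iff the command failed
    }

    NOT wait 2379360   # passes here because the awaited process exits non-zero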
00:26:31.477 [2024-11-03 15:45:09.008545] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:26:31.477 [2024-11-03 15:45:09.008561] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:31.477 rmmod nvme_rdma 00:26:31.477 rmmod nvme_fabrics 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2378818 ']' 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2378818 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@952 -- # '[' -z 2378818 ']' 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2378818 00:26:31.477 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2378818) - No such process 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2378818 is not found' 00:26:31.477 Process with pid 2378818 is not found 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:31.477 00:26:31.477 real 0m5.553s 00:26:31.477 user 0m16.006s 00:26:31.477 sys 0m1.370s 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:31.477 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:31.477 ************************************ 00:26:31.477 END TEST nvmf_shutdown_tc3 00:26:31.477 ************************************ 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:31.738 ************************************ 00:26:31.738 START TEST nvmf_shutdown_tc4 00:26:31.738 ************************************ 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.738 15:45:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:31.738 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:31.738 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:31.738 15:45:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.738 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:31.739 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:31.739 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:26:31.739 15:45:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:31.739 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:31.739 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:31.739 altname enp217s0f0np0 00:26:31.739 altname ens818f0np0 00:26:31.739 inet 192.168.100.8/24 scope global mlx_0_0 00:26:31.739 valid_lft forever preferred_lft forever 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:31.739 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:31.739 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:31.739 altname enp217s0f1np1 00:26:31.739 altname ens818f1np1 00:26:31.739 inet 192.168.100.9/24 scope global mlx_0_1 00:26:31.739 valid_lft forever preferred_lft forever 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:26:31.739 15:45:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:31.739 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:32.007 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:32.007 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:32.008 15:45:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:32.008 192.168.100.9' 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:32.008 192.168.100.9' 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:32.008 192.168.100.9' 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:32.008 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2380405 00:26:32.009 15:45:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2380405 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 2380405 ']' 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:32.009 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:32.009 [2024-11-03 15:45:09.656510] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:26:32.009 [2024-11-03 15:45:09.656560] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.009 [2024-11-03 15:45:09.734098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.009 [2024-11-03 15:45:09.756396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.009 [2024-11-03 15:45:09.756435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.009 [2024-11-03 15:45:09.756445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.009 [2024-11-03 15:45:09.756453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.009 [2024-11-03 15:45:09.756460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
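waitforlisten blocks until the freshly launched nvmf_tgt (pid 2380405) is actually serving RPCs on /var/tmp/spdk.sock, which is why the DPDK EAL and reactor start-up messages stream past in the meantime. A minimal sketch of that style of readiness poll (illustrative, not the framework's exact implementation):

    # Poll until the app answers an RPC, dies, or we give up (~10 s).
    waitforlisten_sketch() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }

    waitforlisten_sketch 2380405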
00:26:32.009 [2024-11-03 15:45:09.758176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.009 [2024-11-03 15:45:09.758263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.009 [2024-11-03 15:45:09.758389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.009 [2024-11-03 15:45:09.758390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:32.273 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:32.273 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:26:32.273 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.273 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:32.273 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:32.273 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.273 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:32.273 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.273 15:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:32.273 [2024-11-03 15:45:09.918427] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b6af50/0x1b6f400) succeed. 00:26:32.273 [2024-11-03 15:45:09.927596] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b6c590/0x1bb0aa0) succeed. 
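With both mlx5 ports registered as IB devices, the RDMA transport is in place for the subsystems created next. The rpc_cmd call is a thin wrapper over rpc.py, so the same transport can be created and verified by hand; a sketch against the default socket:

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports   # confirm the rdma transport is listed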
00:26:32.273 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.273 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:26:32.273 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:26:32.273 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:32.273 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:32.532 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:32.532 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:32.532 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:32.533 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:32.533 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
...
00:26:32.533 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:26:32.533 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:32.533 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:32.533 Malloc1
00:26:32.533 [2024-11-03 15:45:10.161881] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:26:32.533 Malloc2
00:26:32.533 Malloc3
00:26:32.533 Malloc4
00:26:32.533 Malloc5
00:26:32.792 Malloc6
00:26:32.792 Malloc7
00:26:32.792 Malloc8
00:26:32.792 Malloc9
00:26:32.792 Malloc10
00:26:32.792 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.792 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:26:32.792 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:32.792 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:33.052 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2380472
00:26:33.052 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:26:33.052 15:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4
00:26:33.052 [2024-11-03 15:45:10.690153] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
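
The for/cat loop above accumulates one block of RPCs per subsystem into rpcs.txt, and the single rpc_cmd at shutdown.sh@36 then replays the whole batch; the Malloc1 through Malloc10 lines are the target echoing back each created bdev name. One iteration, issued as direct rpc.py calls, would look roughly like this; the malloc bdev size/block size and the serial number are illustrative assumptions, not values visible in this log:

  rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  i=1                                                          # the loop runs i over 1..10
  # Backing bdev for the namespace (size and block size assumed).
  $rpc bdev_malloc_create 64 512 -b "Malloc$i"
  # Create the subsystem (-a: allow any host; serial number assumed),
  # attach the namespace, and listen on the RDMA address/port traced above.
  $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420

With all ten subsystems listening, the spdk_nvme_perf run above then drives 128 queued 44 KiB random writes per queue against the discovery address for 20 seconds, which is the load the shutdown is injected into.
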
00:26:38.325 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:38.325 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2380405
00:26:38.325 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2380405 ']'
00:26:38.325 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2380405
00:26:38.325 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname
00:26:38.325 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:38.325 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2380405
00:26:38.326 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:26:38.326 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:26:38.326 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2380405'
00:26:38.326 killing process with pid 2380405
00:26:38.326 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 2380405
00:26:38.326 15:45:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 2380405
00:26:38.326 NVMe io qpair process completion error
00:26:38.326 NVMe io qpair process completion error
00:26:38.326 NVMe io qpair process completion error
00:26:38.326 NVMe io qpair process completion error
00:26:38.326 NVMe io qpair process completion error
00:26:38.585 15:45:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:26:39.154 [2024-11-03 15:45:16.754762] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:26:39.154 [2024-11-03 15:45:16.755999] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:26:39.154 [2024-11-03 15:45:16.756051] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:26:39.154 [2024-11-03 15:45:16.757254] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:26:39.154 NVMe io qpair process completion error
00:26:39.154 Write completed with error (sct=0, sc=8)
...
00:26:39.156 NVMe io qpair process completion error
00:26:39.156 Write completed with error (sct=0, sc=8)
...
00:26:39.156 NVMe io qpair process completion error
00:26:39.156 NVMe io qpair process completion error
00:26:39.156 NVMe io qpair process completion error
00:26:39.156 NVMe io qpair process completion error
00:26:39.156 Write completed with error (sct=0, sc=8)
...
00:26:39.725 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2380472
00:26:39.725 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:26:39.725 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2380472
00:26:39.725 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:26:39.725 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:39.725 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:26:39.725 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:39.725 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2380472
00:26:39.984 Write completed with error (sct=0, sc=8)
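
This is the heart of tc4: the target (pid 2380405) was killed while spdk_nvme_perf (pid 2380472) still had queues full of writes, so every outstanding I/O completes with an error and the NOT wrapper asserts that waiting on perf yields a nonzero exit. A sketch of that assertion, with NOT being the harness helper that inverts an exit status (pids taken from this run):

  # After killing the target under load, the perf process must fail; a
  # zero exit here would mean the host never noticed the shutdown.
  kill "$nvmfpid"                 # target pid (2380405 in this run)
  if wait "$perfpid"; then        # perf pid (2380472 in this run)
      echo "spdk_nvme_perf unexpectedly succeeded after target shutdown" >&2
      exit 1
  fi
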
...
00:26:39.985 [2024-11-03 15:45:17.763797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:39.985 [2024-11-03 15:45:17.763868] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:39.985 Write completed with error (sct=0, sc=8)
...
00:26:40.247 [2024-11-03 15:45:17.775029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:40.247 [2024-11-03 15:45:17.775097] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:40.247 [2024-11-03 15:45:17.777416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:40.247 [2024-11-03 15:45:17.777462] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:40.247 [2024-11-03 15:45:17.779802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:40.247 [2024-11-03 15:45:17.779843] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:40.247 [2024-11-03 15:45:17.782283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:40.247 [2024-11-03 15:45:17.782332] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:26:40.247 [2024-11-03 15:45:17.784899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:40.247 [2024-11-03 15:45:17.784939] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:40.247 Write completed with error (sct=0, sc=8)
...
00:26:40.247 [2024-11-03 15:45:17.787264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:40.247 [2024-11-03 15:45:17.787304] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:26:40.247 Write completed with error (sct=0, sc=8)
...
00:26:40.247 [2024-11-03 15:45:17.789599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:40.247 [2024-11-03 15:45:17.789640] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:40.247 Write completed with error (sct=0, sc=8)
...
00:26:40.247 [2024-11-03 15:45:17.791741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:40.247 [2024-11-03 15:45:17.791783] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:26:40.248 Write completed with error (sct=0, sc=8)
...
00:26:40.248 [2024-11-03 15:45:17.794341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:40.248 [2024-11-03 15:45:17.794381] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:40.248 Write completed with error (sct=0, sc=8)
...
00:26:40.249 Write
completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 
00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error 
(sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.249 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed 
with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write 
completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Write completed with error (sct=0, sc=8) 00:26:40.250 Initializing NVMe Controllers 00:26:40.250 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3 00:26:40.250 Controller IO queue size 128, less than required. 00:26:40.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:40.250 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4 00:26:40.250 Controller IO queue size 128, less than required. 00:26:40.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:26:40.250 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:26:40.250 Controller IO queue size 128, less than required.
00:26:40.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:40.250 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:26:40.250 Controller IO queue size 128, less than required.
00:26:40.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:40.250 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:26:40.250 Controller IO queue size 128, less than required.
00:26:40.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:40.250 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:26:40.250 Controller IO queue size 128, less than required.
00:26:40.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:40.251 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:26:40.251 Controller IO queue size 128, less than required.
00:26:40.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:40.251 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:26:40.251 Controller IO queue size 128, less than required.
00:26:40.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:40.251 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:26:40.251 Controller IO queue size 128, less than required.
00:26:40.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:40.251 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:26:40.251 Controller IO queue size 128, less than required.
00:26:40.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:40.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:40.251 Initialization complete. Launching workers.
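[Editor's note] The burst of "Write completed with error (sct=0, sc=8)" entries above is the perf tool draining its queues after the shutdown test tore down the target side: under NVMe status code type 0 (generic), status code 8 is "Command Aborted due to SQ Deletion", which lines up with the per-controller CQ transport error -6 (ENXIO). The repeated "Controller IO queue size 128, less than required" advisory can be acted on by rerunning the workload with a shallower queue. A minimal sketch, assuming the standard spdk_nvme_perf flags -q (queue depth), -o (I/O size in bytes), -w (workload), -t (duration in seconds) and -r (transport ID); the concrete transport string below mirrors one of the controllers attached above and is otherwise an assumption:

    # Re-run the fabrics write workload against cnode3 with queue depth 32,
    # so requests are not queued inside the NVMe driver (advisory above).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 32 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode3'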
00:26:40.251 ========================================================
00:26:40.251 Latency(us)
00:26:40.251 Device Information : IOPS MiB/s Average min max
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1536.79 66.03 84050.83 113.45 1248551.60
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1542.73 66.29 97039.97 113.38 2173087.07
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1530.69 65.77 83602.99 103.53 1212004.12
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1529.50 65.72 83755.05 113.22 1208789.49
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1562.23 67.13 95998.54 118.07 2175351.08
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1519.32 65.28 84401.13 115.10 1224472.72
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1550.53 66.62 96810.39 113.56 2184884.63
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1536.79 66.03 97793.04 113.43 2208771.84
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1595.13 68.54 94305.65 107.04 2051027.01
00:26:40.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1564.43 67.22 96244.83 113.41 2174186.76
00:26:40.251 ========================================================
00:26:40.251 Total : 15468.14 664.65 91448.97 103.53 2208771.84
00:26:40.251
00:26:40.251 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
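[Editor's note] In the teardown trace above, the harness records es=1 from the failed perf run and then tests (( es > 128 )): by POSIX shell convention an exit status above 128 means the command was terminated by signal (status - 128), so this check distinguishes a signal death from an ordinary failure. A minimal Bash sketch of the same convention, with a hypothetical wrapper name (not the harness's actual function):

    run_checked() {
        "$@"                # run the wrapped command
        local es=$?
        if (( es > 128 )); then
            # statuses above 128 encode termination by signal (es - 128)
            echo "terminated by signal $((es - 128))" >&2
        fi
        return "$es"
    }

For example, `run_checked sleep 60` interrupted with Ctrl-C exits with 130 and would report signal 2 (SIGINT).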
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2380405 ']'
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2380405
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2380405 ']'
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2380405
00:26:40.251 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2380405) - No such process
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2380405 is not found'
00:26:40.251 Process with pid 2380405 is not found
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:26:40.251
00:26:40.251 real 0m8.617s
00:26:40.251 user 0m32.197s
00:26:40.251 sys 0m1.282s
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:40.251 ************************************
00:26:40.251 END TEST nvmf_shutdown_tc4
00:26:40.251 ************************************
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:26:40.251
00:26:40.251 real 0m32.405s
00:26:40.251 user 1m35.860s
00:26:40.251 sys 0m10.333s
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:40.251 15:45:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:40.251 ************************************
00:26:40.251 END TEST nvmf_shutdown
00:26:40.251 ************************************
00:26:40.251 15:45:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:26:40.251
00:26:40.251 real 15m20.099s
00:26:40.251 user 47m15.586s
00:26:40.251 sys 3m9.965s
00:26:40.251 15:45:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:40.251 15:45:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:40.251 ************************************
00:26:40.251 END TEST nvmf_target_extra
************************************ 00:26:40.511 15:45:18 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:40.511 15:45:18 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:40.511 15:45:18 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:40.511 15:45:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:40.511 ************************************ 00:26:40.511 START TEST nvmf_host 00:26:40.511 ************************************ 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:26:40.511 * Looking for test storage... 00:26:40.511 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.511 --rc genhtml_branch_coverage=1 00:26:40.511 --rc genhtml_function_coverage=1 00:26:40.511 --rc genhtml_legend=1 00:26:40.511 --rc geninfo_all_blocks=1 00:26:40.511 --rc geninfo_unexecuted_blocks=1 00:26:40.511 00:26:40.511 ' 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.511 --rc genhtml_branch_coverage=1 00:26:40.511 --rc genhtml_function_coverage=1 00:26:40.511 --rc genhtml_legend=1 00:26:40.511 --rc geninfo_all_blocks=1 00:26:40.511 --rc geninfo_unexecuted_blocks=1 00:26:40.511 00:26:40.511 ' 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.511 --rc genhtml_branch_coverage=1 00:26:40.511 --rc genhtml_function_coverage=1 00:26:40.511 --rc genhtml_legend=1 00:26:40.511 --rc geninfo_all_blocks=1 00:26:40.511 --rc geninfo_unexecuted_blocks=1 00:26:40.511 00:26:40.511 ' 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.511 --rc genhtml_branch_coverage=1 00:26:40.511 --rc genhtml_function_coverage=1 00:26:40.511 --rc genhtml_legend=1 00:26:40.511 --rc geninfo_all_blocks=1 00:26:40.511 --rc geninfo_unexecuted_blocks=1 00:26:40.511 00:26:40.511 ' 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.511 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:40.771 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.772 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.772 ************************************ 00:26:40.772 START TEST nvmf_multicontroller 00:26:40.772 ************************************ 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:26:40.772 * Looking for test storage... 
00:26:40.772 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.772 --rc genhtml_branch_coverage=1 00:26:40.772 --rc genhtml_function_coverage=1 00:26:40.772 --rc genhtml_legend=1 00:26:40.772 --rc geninfo_all_blocks=1 00:26:40.772 --rc geninfo_unexecuted_blocks=1 00:26:40.772 00:26:40.772 ' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.772 --rc genhtml_branch_coverage=1 00:26:40.772 --rc genhtml_function_coverage=1 00:26:40.772 --rc genhtml_legend=1 00:26:40.772 --rc geninfo_all_blocks=1 00:26:40.772 --rc geninfo_unexecuted_blocks=1 00:26:40.772 00:26:40.772 ' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.772 --rc genhtml_branch_coverage=1 00:26:40.772 --rc genhtml_function_coverage=1 00:26:40.772 --rc genhtml_legend=1 00:26:40.772 --rc geninfo_all_blocks=1 00:26:40.772 --rc geninfo_unexecuted_blocks=1 00:26:40.772 00:26:40.772 ' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:40.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.772 --rc genhtml_branch_coverage=1 00:26:40.772 --rc genhtml_function_coverage=1 00:26:40.772 --rc genhtml_legend=1 00:26:40.772 --rc geninfo_all_blocks=1 00:26:40.772 --rc geninfo_unexecuted_blocks=1 00:26:40.772 00:26:40.772 ' 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:40.772 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.773 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:40.773 15:45:18 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:26:40.773 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:26:40.773 00:26:40.773 real 0m0.174s 00:26:40.773 user 0m0.101s 00:26:40.773 sys 0m0.083s 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:40.773 15:45:18 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.773 ************************************ 00:26:40.773 END TEST nvmf_multicontroller 00:26:40.773 ************************************ 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.032 ************************************ 00:26:41.032 START TEST nvmf_aer 00:26:41.032 ************************************ 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:26:41.032 * Looking for test storage... 
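[Annotation] The multicontroller suite above finishes in well under a second because host/multicontroller.sh guards against the rdma transport before doing any work. A minimal sketch of that guard, reconstructed from the @18-@20 trace lines just above (the TEST_TRANSPORT variable name is an assumption; only the test, the echo, and the exit are visible in the trace):

    # Hypothetical reconstruction of host/multicontroller.sh lines 18-20
    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi

Because the guard exits 0, run_test counts the suite as passed, which is why the END TEST banner above reports no failure.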
00:26:41.032 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:41.032 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:41.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.033 --rc genhtml_branch_coverage=1 00:26:41.033 --rc genhtml_function_coverage=1 00:26:41.033 --rc genhtml_legend=1 00:26:41.033 --rc geninfo_all_blocks=1 00:26:41.033 --rc geninfo_unexecuted_blocks=1 00:26:41.033 00:26:41.033 ' 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:41.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.033 --rc genhtml_branch_coverage=1 00:26:41.033 --rc genhtml_function_coverage=1 00:26:41.033 --rc genhtml_legend=1 00:26:41.033 --rc geninfo_all_blocks=1 00:26:41.033 --rc geninfo_unexecuted_blocks=1 00:26:41.033 00:26:41.033 ' 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:41.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.033 --rc genhtml_branch_coverage=1 00:26:41.033 --rc genhtml_function_coverage=1 00:26:41.033 --rc genhtml_legend=1 00:26:41.033 --rc geninfo_all_blocks=1 00:26:41.033 --rc geninfo_unexecuted_blocks=1 00:26:41.033 00:26:41.033 ' 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:41.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.033 --rc genhtml_branch_coverage=1 00:26:41.033 --rc genhtml_function_coverage=1 00:26:41.033 --rc genhtml_legend=1 00:26:41.033 --rc geninfo_all_blocks=1 00:26:41.033 --rc geninfo_unexecuted_blocks=1 00:26:41.033 00:26:41.033 ' 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:41.033 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.292 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.292 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.292 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.292 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.292 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:41.293 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.293 15:45:18 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:48.015 15:45:25 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:48.015 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:48.015 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:48.016 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:48.016 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.016 
15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:48.016 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.016 15:45:25 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:48.016 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:48.016 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:48.016 altname enp217s0f0np0 00:26:48.016 altname ens818f0np0 00:26:48.016 inet 192.168.100.8/24 scope global mlx_0_0 00:26:48.016 valid_lft forever preferred_lft forever 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:48.016 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:48.016 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:48.016 altname enp217s0f1np1 00:26:48.016 altname ens818f1np1 00:26:48.016 inet 192.168.100.9/24 scope global mlx_0_1 00:26:48.016 valid_lft forever preferred_lft forever 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:26:48.016 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:48.017 192.168.100.9' 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:48.017 192.168.100.9' 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:48.017 192.168.100.9' 
00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2385252 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2385252 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 2385252 ']' 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.017 [2024-11-03 15:45:25.436682] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:26:48.017 [2024-11-03 15:45:25.436732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.017 [2024-11-03 15:45:25.514423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.017 [2024-11-03 15:45:25.536580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.017 [2024-11-03 15:45:25.536621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.017 [2024-11-03 15:45:25.536631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.017 [2024-11-03 15:45:25.536639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
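[Annotation] nvmfappstart, traced just above, backgrounds the target binary and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket (the polling loop is a simplification, not the exact common.sh code):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the app answers RPCs; give up if it dies during startup.
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || exit 1
        sleep 0.1
    done

The -m 0xF core mask asks for four reactors, matching the four "Reactor started" notices that follow.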
00:26:48.017 [2024-11-03 15:45:25.536662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.017 [2024-11-03 15:45:25.538253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.017 [2024-11-03 15:45:25.538350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.017 [2024-11-03 15:45:25.538440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.017 [2024-11-03 15:45:25.538442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.017 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.017 [2024-11-03 15:45:25.707587] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17ebc50/0x17f0100) succeed. 00:26:48.017 [2024-11-03 15:45:25.716715] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17ed290/0x18317a0) succeed. 
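[Annotation] The first RPC of the suite, shown above, creates the RDMA transport, and both mlx5 ports get an IB device as a result. Run standalone, the call would be (rpc.py path relative to the spdk checkout; the argument values are verbatim from the trace):

    # -t rdma selects the transport, --num-shared-buffers 1024 mirrors
    # NVMF_TRANSPORT_OPTS, and -u 8192 sets the in-capsule data size.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192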
00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.276 Malloc0 00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.276 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.277 [2024-11-03 15:45:25.904207] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.277 [ 00:26:48.277 { 00:26:48.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:48.277 "subtype": "Discovery", 00:26:48.277 "listen_addresses": [], 00:26:48.277 "allow_any_host": true, 00:26:48.277 "hosts": [] 00:26:48.277 }, 00:26:48.277 { 00:26:48.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.277 "subtype": "NVMe", 00:26:48.277 "listen_addresses": [ 00:26:48.277 { 00:26:48.277 "trtype": "RDMA", 00:26:48.277 "adrfam": "IPv4", 00:26:48.277 "traddr": "192.168.100.8", 00:26:48.277 "trsvcid": "4420" 00:26:48.277 } 00:26:48.277 ], 00:26:48.277 "allow_any_host": true, 00:26:48.277 "hosts": [], 00:26:48.277 "serial_number": "SPDK00000000000001", 00:26:48.277 "model_number": "SPDK bdev Controller", 00:26:48.277 "max_namespaces": 2, 00:26:48.277 "min_cntlid": 1, 00:26:48.277 "max_cntlid": 65519, 00:26:48.277 "namespaces": [ 00:26:48.277 { 00:26:48.277 "nsid": 1, 00:26:48.277 "bdev_name": "Malloc0", 00:26:48.277 "name": "Malloc0", 00:26:48.277 "nguid": "9545964881F940D8AC06A0B8F0F772A4", 00:26:48.277 "uuid": "95459648-81f9-40d8-ac06-a0b8f0f772a4" 00:26:48.277 } 00:26:48.277 ] 00:26:48.277 } 00:26:48.277 ] 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2385371 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:26:48.277 15:45:25 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:26:48.277 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:48.277 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:26:48.277 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:26:48.277 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.536 Malloc1 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.536 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.536 [ 00:26:48.536 { 00:26:48.536 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:48.536 "subtype": "Discovery", 00:26:48.536 "listen_addresses": [], 00:26:48.536 "allow_any_host": true, 00:26:48.536 "hosts": [] 00:26:48.536 }, 00:26:48.536 { 00:26:48.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.536 "subtype": "NVMe", 00:26:48.536 "listen_addresses": [ 00:26:48.536 { 00:26:48.536 "trtype": "RDMA", 00:26:48.536 "adrfam": "IPv4", 00:26:48.536 "traddr": "192.168.100.8", 00:26:48.536 "trsvcid": "4420" 00:26:48.536 } 00:26:48.536 ], 00:26:48.536 "allow_any_host": true, 00:26:48.536 "hosts": [], 00:26:48.536 "serial_number": "SPDK00000000000001", 00:26:48.536 "model_number": "SPDK bdev Controller", 00:26:48.536 "max_namespaces": 2, 00:26:48.536 "min_cntlid": 1, 00:26:48.536 "max_cntlid": 65519, 00:26:48.536 "namespaces": [ 00:26:48.536 { 00:26:48.536 "nsid": 1, 00:26:48.536 "bdev_name": "Malloc0", 00:26:48.536 "name": "Malloc0", 00:26:48.537 "nguid": "9545964881F940D8AC06A0B8F0F772A4", 00:26:48.537 "uuid": "95459648-81f9-40d8-ac06-a0b8f0f772a4" 00:26:48.537 }, 00:26:48.537 { 00:26:48.537 "nsid": 2, 00:26:48.537 "bdev_name": "Malloc1", 00:26:48.537 "name": "Malloc1", 00:26:48.537 "nguid": "94D5F88B99494FD8B1BFC9C076ADC6AE", 00:26:48.537 "uuid": "94d5f88b-9949-4fd8-b1bf-c9c076adc6ae" 00:26:48.537 } 00:26:48.537 ] 00:26:48.537 } 00:26:48.537 ] 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2385371 00:26:48.537 Asynchronous Event Request test 00:26:48.537 Attaching to 192.168.100.8 00:26:48.537 Attached to 192.168.100.8 00:26:48.537 Registering asynchronous event callbacks... 00:26:48.537 Starting namespace attribute notice tests for all controllers... 00:26:48.537 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:48.537 aer_cb - Changed Namespace 00:26:48.537 Cleaning up... 
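[Annotation] The whole AER exchange above is RPC-driven: the subsystem was created with -m 2, so it can hold a second namespace, and attaching one while the aer tool sits connected forces a Namespace Attribute Changed event. The trigger pair, replayed verbatim from the trace (rpc.py path assumed relative to the spdk checkout):

    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    # The target raises the AEN; the host logs "aer_cb for log page 4"
    # (0x04 is the Changed Namespace List page) and touches /tmp/aer_touch_file.

Once the touch file appears, waitforfile returns and the script tears everything down, matching the "Cleaning up..." line above.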
00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.537 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:48.537 rmmod nvme_rdma 00:26:48.796 rmmod nvme_fabrics 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2385252 ']' 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2385252 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 2385252 ']' 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 2385252 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2385252 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2385252' 00:26:48.796 killing process 
with pid 2385252 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 2385252 00:26:48.796 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 2385252 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:49.055 00:26:49.055 real 0m8.035s 00:26:49.055 user 0m6.137s 00:26:49.055 sys 0m5.574s 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.055 ************************************ 00:26:49.055 END TEST nvmf_aer 00:26:49.055 ************************************ 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.055 ************************************ 00:26:49.055 START TEST nvmf_async_init 00:26:49.055 ************************************ 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:26:49.055 * Looking for test storage... 00:26:49.055 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:26:49.055 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:49.315 
15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:49.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.315 --rc genhtml_branch_coverage=1 00:26:49.315 --rc genhtml_function_coverage=1 00:26:49.315 --rc genhtml_legend=1 00:26:49.315 --rc geninfo_all_blocks=1 00:26:49.315 --rc geninfo_unexecuted_blocks=1 00:26:49.315 00:26:49.315 ' 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:49.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.315 --rc genhtml_branch_coverage=1 00:26:49.315 --rc genhtml_function_coverage=1 00:26:49.315 --rc genhtml_legend=1 00:26:49.315 --rc geninfo_all_blocks=1 00:26:49.315 --rc geninfo_unexecuted_blocks=1 00:26:49.315 00:26:49.315 ' 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:49.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.315 --rc genhtml_branch_coverage=1 00:26:49.315 --rc genhtml_function_coverage=1 00:26:49.315 --rc genhtml_legend=1 00:26:49.315 --rc geninfo_all_blocks=1 00:26:49.315 --rc geninfo_unexecuted_blocks=1 00:26:49.315 00:26:49.315 ' 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:49.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.315 --rc genhtml_branch_coverage=1 00:26:49.315 --rc genhtml_function_coverage=1 00:26:49.315 --rc genhtml_legend=1 00:26:49.315 --rc geninfo_all_blocks=1 00:26:49.315 --rc geninfo_unexecuted_blocks=1 00:26:49.315 00:26:49.315 ' 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
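[Annotation] The lcov probe that opens each suite funnels into the cmp_versions helper in scripts/common.sh; the decimal/ver1[v]/ver2[v] lines above are its component loop. A simplified, numeric-only equivalent of the '<' path (the real helper also handles other operators and validates each component):

    # Returns 0 (true) when $1 is strictly older than $2; versions are split
    # on the same IFS=.-: the trace shows. Assumes purely numeric components.
    version_lt() {
        local IFS='.-:' v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov is pre-2.x'   # the branch taken above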
00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:49.315 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:49.315 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
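One detail worth pulling out of the setup above: the host identity is generated, not hard-coded. nvmf/common.sh@17-18 runs nvme gen-hostnqn and then reuses the UUID tail of the NQN as the host ID, so both values stay consistent per machine. Reproduced standalone below (requires nvme-cli; the parameter expansion is one equivalent way to peel the prefix off, not necessarily the script's exact code):

    #!/usr/bin/env bash
    # Derive the host NQN and matching host ID, as the sourced common.sh does.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    echo "NQN: $NVME_HOSTNQN"
    echo "ID:  $NVME_HOSTID"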
00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1fc2abbb06a04509a20dd951a7c1303b 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:49.316 15:45:26 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:55.886 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:55.886 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:55.886 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:55.886 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:55.886 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:55.887 15:45:33 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:55.887 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:55.887 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:55.887 altname enp217s0f0np0 00:26:55.887 altname ens818f0np0 00:26:55.887 inet 192.168.100.8/24 scope global mlx_0_0 00:26:55.887 valid_lft forever preferred_lft forever 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:55.887 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:55.887 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:55.887 altname enp217s0f1np1 00:26:55.887 altname ens818f1np1 00:26:55.887 inet 192.168.100.9/24 scope global mlx_0_1 00:26:55.887 valid_lft forever preferred_lft forever 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:55.887 192.168.100.9' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:55.887 192.168.100.9' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:55.887 192.168.100.9' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:55.887 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2388792 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2388792 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 2388792 ']' 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.147 [2024-11-03 15:45:33.730493] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:26:56.147 [2024-11-03 15:45:33.730541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.147 [2024-11-03 15:45:33.805998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.147 [2024-11-03 15:45:33.827366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.147 [2024-11-03 15:45:33.827404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.147 [2024-11-03 15:45:33.827415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.147 [2024-11-03 15:45:33.827423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.147 [2024-11-03 15:45:33.827447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
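Everything from the PCI walk down to the EAL banner above is environment bring-up, and it condenses to three steps: load the IB/RDMA kernel modules, read the first IPv4 address off each mlx interface, then start nvmf_tgt and poll its RPC socket until it answers. A hedged reconstruction of those steps (module names, interface names, addresses, and the -i 0 -e 0xFFFF -m 0x1 flags are verbatim from the trace; the polling loop only approximates waitforlisten, whose real body lives in autotest_common.sh):

    #!/usr/bin/env bash
    set -e
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # 1) Kernel side: IB core stack plus the NVMe/RDMA initiator module.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        modprobe "$mod"
    done

    # 2) Addressing: first IPv4 address per RDMA netdev (the ip/awk/cut
    #    pipeline is lifted from get_ip_address in the trace).
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 here

    # 3) Target: same shm id, event mask, and core mask nvmfappstart used.
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; do
        kill -0 "$nvmfpid"   # abort (via set -e) if the target died early
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) up; targets $NVMF_FIRST_TARGET_IP, $NVMF_SECOND_TARGET_IP"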
00:26:56.147 [2024-11-03 15:45:33.828065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.147 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.407 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.407 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:56.407 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.407 15:45:33 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.408 [2024-11-03 15:45:33.990271] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x172dc50/0x1732100) succeed. 00:26:56.408 [2024-11-03 15:45:33.998949] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x172f0b0/0x17737a0) succeed. 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.408 null0 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1fc2abbb06a04509a20dd951a7c1303b 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.408 [2024-11-03 15:45:34.073508] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.408 nvme0n1 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.408 [ 00:26:56.408 { 00:26:56.408 "name": "nvme0n1", 00:26:56.408 "aliases": [ 00:26:56.408 "1fc2abbb-06a0-4509-a20d-d951a7c1303b" 00:26:56.408 ], 00:26:56.408 "product_name": "NVMe disk", 00:26:56.408 "block_size": 512, 00:26:56.408 "num_blocks": 2097152, 00:26:56.408 "uuid": "1fc2abbb-06a0-4509-a20d-d951a7c1303b", 00:26:56.408 "numa_id": 1, 00:26:56.408 "assigned_rate_limits": { 00:26:56.408 "rw_ios_per_sec": 0, 00:26:56.408 "rw_mbytes_per_sec": 0, 00:26:56.408 "r_mbytes_per_sec": 0, 00:26:56.408 "w_mbytes_per_sec": 0 00:26:56.408 }, 00:26:56.408 "claimed": false, 00:26:56.408 "zoned": false, 00:26:56.408 "supported_io_types": { 00:26:56.408 "read": true, 00:26:56.408 "write": true, 00:26:56.408 "unmap": false, 00:26:56.408 "flush": true, 00:26:56.408 "reset": true, 00:26:56.408 "nvme_admin": true, 00:26:56.408 "nvme_io": true, 00:26:56.408 "nvme_io_md": false, 00:26:56.408 "write_zeroes": true, 00:26:56.408 "zcopy": false, 00:26:56.408 "get_zone_info": false, 00:26:56.408 "zone_management": false, 00:26:56.408 "zone_append": false, 00:26:56.408 "compare": true, 00:26:56.408 "compare_and_write": true, 00:26:56.408 "abort": true, 00:26:56.408 "seek_hole": false, 00:26:56.408 "seek_data": false, 00:26:56.408 "copy": true, 00:26:56.408 "nvme_iov_md": false 00:26:56.408 }, 00:26:56.408 "memory_domains": [ 00:26:56.408 { 00:26:56.408 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:56.408 "dma_device_type": 0 00:26:56.408 } 00:26:56.408 ], 00:26:56.408 "driver_specific": { 00:26:56.408 "nvme": [ 00:26:56.408 { 00:26:56.408 "trid": { 00:26:56.408 "trtype": "RDMA", 00:26:56.408 "adrfam": "IPv4", 00:26:56.408 "traddr": "192.168.100.8", 00:26:56.408 "trsvcid": "4420", 00:26:56.408 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:56.408 }, 00:26:56.408 "ctrlr_data": { 00:26:56.408 "cntlid": 1, 00:26:56.408 "vendor_id": "0x8086", 00:26:56.408 "model_number": "SPDK bdev Controller", 00:26:56.408 "serial_number": "00000000000000000000", 00:26:56.408 "firmware_revision": "25.01", 00:26:56.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.408 "oacs": { 00:26:56.408 "security": 0, 
00:26:56.408 "format": 0, 00:26:56.408 "firmware": 0, 00:26:56.408 "ns_manage": 0 00:26:56.408 }, 00:26:56.408 "multi_ctrlr": true, 00:26:56.408 "ana_reporting": false 00:26:56.408 }, 00:26:56.408 "vs": { 00:26:56.408 "nvme_version": "1.3" 00:26:56.408 }, 00:26:56.408 "ns_data": { 00:26:56.408 "id": 1, 00:26:56.408 "can_share": true 00:26:56.408 } 00:26:56.408 } 00:26:56.408 ], 00:26:56.408 "mp_policy": "active_passive" 00:26:56.408 } 00:26:56.408 } 00:26:56.408 ] 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.408 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.408 [2024-11-03 15:45:34.189840] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:56.668 [2024-11-03 15:45:34.210071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:56.668 [2024-11-03 15:45:34.235343] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.668 [ 00:26:56.668 { 00:26:56.668 "name": "nvme0n1", 00:26:56.668 "aliases": [ 00:26:56.668 "1fc2abbb-06a0-4509-a20d-d951a7c1303b" 00:26:56.668 ], 00:26:56.668 "product_name": "NVMe disk", 00:26:56.668 "block_size": 512, 00:26:56.668 "num_blocks": 2097152, 00:26:56.668 "uuid": "1fc2abbb-06a0-4509-a20d-d951a7c1303b", 00:26:56.668 "numa_id": 1, 00:26:56.668 "assigned_rate_limits": { 00:26:56.668 "rw_ios_per_sec": 0, 00:26:56.668 "rw_mbytes_per_sec": 0, 00:26:56.668 "r_mbytes_per_sec": 0, 00:26:56.668 "w_mbytes_per_sec": 0 00:26:56.668 }, 00:26:56.668 "claimed": false, 00:26:56.668 "zoned": false, 00:26:56.668 "supported_io_types": { 00:26:56.668 "read": true, 00:26:56.668 "write": true, 00:26:56.668 "unmap": false, 00:26:56.668 "flush": true, 00:26:56.668 "reset": true, 00:26:56.668 "nvme_admin": true, 00:26:56.668 "nvme_io": true, 00:26:56.668 "nvme_io_md": false, 00:26:56.668 "write_zeroes": true, 00:26:56.668 "zcopy": false, 00:26:56.668 "get_zone_info": false, 00:26:56.668 "zone_management": false, 00:26:56.668 "zone_append": false, 00:26:56.668 "compare": true, 00:26:56.668 "compare_and_write": true, 00:26:56.668 "abort": true, 00:26:56.668 "seek_hole": false, 00:26:56.668 "seek_data": false, 00:26:56.668 "copy": true, 00:26:56.668 "nvme_iov_md": false 00:26:56.668 }, 00:26:56.668 "memory_domains": [ 00:26:56.668 { 00:26:56.668 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:56.668 "dma_device_type": 0 00:26:56.668 } 00:26:56.668 ], 00:26:56.668 "driver_specific": { 00:26:56.668 "nvme": [ 00:26:56.668 { 00:26:56.668 "trid": { 00:26:56.668 "trtype": "RDMA", 00:26:56.668 "adrfam": "IPv4", 00:26:56.668 "traddr": "192.168.100.8", 
00:26:56.668 "trsvcid": "4420", 00:26:56.668 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:56.668 }, 00:26:56.668 "ctrlr_data": { 00:26:56.668 "cntlid": 2, 00:26:56.668 "vendor_id": "0x8086", 00:26:56.668 "model_number": "SPDK bdev Controller", 00:26:56.668 "serial_number": "00000000000000000000", 00:26:56.668 "firmware_revision": "25.01", 00:26:56.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.668 "oacs": { 00:26:56.668 "security": 0, 00:26:56.668 "format": 0, 00:26:56.668 "firmware": 0, 00:26:56.668 "ns_manage": 0 00:26:56.668 }, 00:26:56.668 "multi_ctrlr": true, 00:26:56.668 "ana_reporting": false 00:26:56.668 }, 00:26:56.668 "vs": { 00:26:56.668 "nvme_version": "1.3" 00:26:56.668 }, 00:26:56.668 "ns_data": { 00:26:56.668 "id": 1, 00:26:56.668 "can_share": true 00:26:56.668 } 00:26:56.668 } 00:26:56.668 ], 00:26:56.668 "mp_policy": "active_passive" 00:26:56.668 } 00:26:56.668 } 00:26:56.668 ] 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.wssbpnivqs 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.wssbpnivqs 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.wssbpnivqs 00:26:56.668 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.669 [2024-11-03 15:45:34.325946] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.669 [2024-11-03 15:45:34.350013] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:56.669 nvme0n1 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.669 [ 00:26:56.669 { 00:26:56.669 "name": "nvme0n1", 00:26:56.669 "aliases": [ 00:26:56.669 "1fc2abbb-06a0-4509-a20d-d951a7c1303b" 00:26:56.669 ], 00:26:56.669 "product_name": "NVMe disk", 00:26:56.669 "block_size": 512, 00:26:56.669 "num_blocks": 2097152, 00:26:56.669 "uuid": "1fc2abbb-06a0-4509-a20d-d951a7c1303b", 00:26:56.669 "numa_id": 1, 00:26:56.669 "assigned_rate_limits": { 00:26:56.669 "rw_ios_per_sec": 0, 00:26:56.669 "rw_mbytes_per_sec": 0, 00:26:56.669 "r_mbytes_per_sec": 0, 00:26:56.669 "w_mbytes_per_sec": 0 00:26:56.669 }, 00:26:56.669 "claimed": false, 00:26:56.669 "zoned": false, 00:26:56.669 "supported_io_types": { 00:26:56.669 "read": true, 00:26:56.669 "write": true, 00:26:56.669 "unmap": false, 00:26:56.669 "flush": true, 00:26:56.669 "reset": true, 00:26:56.669 "nvme_admin": true, 00:26:56.669 "nvme_io": true, 00:26:56.669 "nvme_io_md": false, 00:26:56.669 "write_zeroes": true, 00:26:56.669 "zcopy": false, 00:26:56.669 "get_zone_info": false, 00:26:56.669 "zone_management": false, 00:26:56.669 "zone_append": false, 00:26:56.669 "compare": true, 00:26:56.669 "compare_and_write": true, 00:26:56.669 "abort": true, 00:26:56.669 "seek_hole": false, 00:26:56.669 "seek_data": false, 00:26:56.669 "copy": true, 00:26:56.669 "nvme_iov_md": false 00:26:56.669 }, 00:26:56.669 "memory_domains": [ 00:26:56.669 { 00:26:56.669 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:26:56.669 "dma_device_type": 0 00:26:56.669 } 00:26:56.669 ], 00:26:56.669 "driver_specific": { 00:26:56.669 "nvme": [ 00:26:56.669 { 00:26:56.669 "trid": { 00:26:56.669 "trtype": "RDMA", 00:26:56.669 "adrfam": "IPv4", 00:26:56.669 "traddr": "192.168.100.8", 00:26:56.669 "trsvcid": "4421", 00:26:56.669 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:56.669 }, 00:26:56.669 "ctrlr_data": { 00:26:56.669 "cntlid": 3, 00:26:56.669 "vendor_id": "0x8086", 00:26:56.669 "model_number": "SPDK bdev Controller", 00:26:56.669 
"serial_number": "00000000000000000000", 00:26:56.669 "firmware_revision": "25.01", 00:26:56.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.669 "oacs": { 00:26:56.669 "security": 0, 00:26:56.669 "format": 0, 00:26:56.669 "firmware": 0, 00:26:56.669 "ns_manage": 0 00:26:56.669 }, 00:26:56.669 "multi_ctrlr": true, 00:26:56.669 "ana_reporting": false 00:26:56.669 }, 00:26:56.669 "vs": { 00:26:56.669 "nvme_version": "1.3" 00:26:56.669 }, 00:26:56.669 "ns_data": { 00:26:56.669 "id": 1, 00:26:56.669 "can_share": true 00:26:56.669 } 00:26:56.669 } 00:26:56.669 ], 00:26:56.669 "mp_policy": "active_passive" 00:26:56.669 } 00:26:56.669 } 00:26:56.669 ] 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.669 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.928 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.928 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.wssbpnivqs 00:26:56.928 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:56.928 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:56.928 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.928 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:56.928 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:56.928 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:56.929 rmmod nvme_rdma 00:26:56.929 rmmod nvme_fabrics 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2388792 ']' 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2388792 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 2388792 ']' 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 2388792 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2388792 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:56.929 15:45:34 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2388792' 00:26:56.929 killing process with pid 2388792 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 2388792 00:26:56.929 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 2388792 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:57.188 00:26:57.188 real 0m8.051s 00:26:57.188 user 0m3.129s 00:26:57.188 sys 0m5.541s 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:57.188 ************************************ 00:26:57.188 END TEST nvmf_async_init 00:26:57.188 ************************************ 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.188 ************************************ 00:26:57.188 START TEST dma 00:26:57.188 ************************************ 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:26:57.188 * Looking for test storage... 
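Before the dma preamble gets going, the nvmf_async_init flow that just finished is worth condensing: every traced step was a single RPC issued through the rpc_cmd wrapper. The same sequence written directly against rpc.py (commands, NQNs, ports, nguid handling, and the PSK string are all taken from the trace above; jq is an assumed convenience for pulling the uuid out of the JSON dump, not something the harness itself uses):

    #!/usr/bin/env bash
    set -e
    rpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"; }
    IP=192.168.100.8
    NQN=nqn.2016-06.io.spdk:cnode0

    # Target side: transport, 1024-block x 512-byte null bdev, subsystem, namespace.
    rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    rpc bdev_null_create null0 1024 512
    rpc bdev_wait_for_examine
    rpc nvmf_create_subsystem "$NQN" -a
    nguid=$(uuidgen | tr -d -)
    rpc nvmf_subsystem_add_ns "$NQN" null0 -g "$nguid"
    rpc nvmf_subsystem_add_listener "$NQN" -t rdma -a "$IP" -s 4420

    # Host side: attach, confirm the namespace surfaced with the right GUID,
    # reset (cntlid bumps 1 -> 2 in the log), detach.
    rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a "$IP" -f ipv4 -s 4420 -n "$NQN"
    rpc bdev_get_bdevs -b nvme0n1 | jq -r '.[0].uuid'   # $nguid with dashes restored
    rpc bdev_nvme_reset_controller nvme0
    rpc bdev_nvme_detach_controller nvme0

    # TLS leg: PSK on disk with 0600 perms, keyring entry, secure listener on
    # 4421, explicit host grant, then attach with the same key.
    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"
    rpc keyring_file_add_key key0 "$key_path"
    rpc nvmf_subsystem_allow_any_host "$NQN" --disable
    rpc nvmf_subsystem_add_listener "$NQN" -t rdma -a "$IP" -s 4421 --secure-channel
    rpc nvmf_subsystem_add_host "$NQN" nqn.2016-06.io.spdk:host1 --psk key0
    rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a "$IP" -f ipv4 -s 4421 \
        -n "$NQN" -q nqn.2016-06.io.spdk:host1 --psk key0
    rpc bdev_nvme_detach_controller nvme0
    rm -f "$key_path"

Note the "TLS support is considered experimental" notice the secured attach printed above; the log shows the attach over 4421 still succeeding, which is what the test asserts.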
00:26:57.188 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:26:57.188 15:45:34 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:57.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.448 --rc genhtml_branch_coverage=1 00:26:57.448 --rc genhtml_function_coverage=1 00:26:57.448 --rc genhtml_legend=1 00:26:57.448 --rc geninfo_all_blocks=1 00:26:57.448 --rc geninfo_unexecuted_blocks=1 00:26:57.448 00:26:57.448 ' 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:57.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.448 --rc genhtml_branch_coverage=1 00:26:57.448 --rc genhtml_function_coverage=1 00:26:57.448 --rc genhtml_legend=1 00:26:57.448 --rc geninfo_all_blocks=1 00:26:57.448 --rc geninfo_unexecuted_blocks=1 00:26:57.448 00:26:57.448 ' 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:57.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.448 --rc genhtml_branch_coverage=1 00:26:57.448 --rc genhtml_function_coverage=1 00:26:57.448 --rc genhtml_legend=1 00:26:57.448 --rc geninfo_all_blocks=1 00:26:57.448 --rc geninfo_unexecuted_blocks=1 00:26:57.448 00:26:57.448 ' 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:57.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.448 --rc genhtml_branch_coverage=1 00:26:57.448 --rc genhtml_function_coverage=1 00:26:57.448 --rc genhtml_legend=1 00:26:57.448 --rc geninfo_all_blocks=1 00:26:57.448 --rc geninfo_unexecuted_blocks=1 00:26:57.448 00:26:57.448 ' 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:57.448 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:57.449 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
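The lt/cmp_versions trace above is how common.sh decides whether the installed lcov predates 2.x and still needs the legacy --rc coverage flags. A condensed, standalone sketch of that comparison, assuming the same IFS=.-: splitting and purely numeric components; the function name and the :-0 default for missing components are mine:

    version_lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # compare component-wise; a missing component counts as 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    # 1.15 < 2, so this run enables the old-lcov flags, as traced above:
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi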
00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:26:57.449 15:45:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:05.574 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:05.574 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:05.574 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:05.574 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:05.574 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:05.575 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:05.575 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:05.575 altname enp217s0f0np0 00:27:05.575 altname ens818f0np0 00:27:05.575 inet 192.168.100.8/24 scope global mlx_0_0 00:27:05.575 valid_lft forever preferred_lft forever 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:05.575 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:05.575 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:05.575 altname enp217s0f1np1 00:27:05.575 altname ens818f1np1 00:27:05.575 inet 192.168.100.9/24 scope global mlx_0_1 00:27:05.575 valid_lft forever preferred_lft forever 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:27:05.575 15:45:41 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:05.575 192.168.100.9' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:05.575 192.168.100.9' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:05.575 192.168.100.9' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=2392253 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 2392253 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@833 -- # '[' -z 2392253 ']' 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.575 [2024-11-03 15:45:42.167388] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:27:05.575 [2024-11-03 15:45:42.167440] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.575 [2024-11-03 15:45:42.244562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:05.575 [2024-11-03 15:45:42.265882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.575 [2024-11-03 15:45:42.265932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.575 [2024-11-03 15:45:42.265941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.575 [2024-11-03 15:45:42.265949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.575 [2024-11-03 15:45:42.265975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
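nvmftestinit above walked both mlx5 ports, matched them against the rxe list, and harvested their IPv4 addresses with the same three-stage pipeline each time. Condensed to its effect (interface names are the ones discovered in this run; other hosts will differ):

    for dev in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
    done
    # prints 192.168.100.8 then 192.168.100.9 here; common.sh keeps the first
    # line (head -n 1) as NVMF_FIRST_TARGET_IP and the second (tail -n +2 |
    # head -n 1) as NVMF_SECOND_TARGET_IP before loading nvme-rdma.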
00:27:05.575 [2024-11-03 15:45:42.267253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.575 [2024-11-03 15:45:42.267256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@866 -- # return 0 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.575 [2024-11-03 15:45:42.429646] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14c6590/0x14caa40) succeed. 00:27:05.575 [2024-11-03 15:45:42.438510] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14c7a90/0x150c0e0) succeed. 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.575 Malloc0 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.575 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:05.575 [2024-11-03 15:45:42.589878] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:05.576 { 00:27:05.576 "params": { 00:27:05.576 "name": "Nvme$subsystem", 00:27:05.576 "trtype": "$TEST_TRANSPORT", 00:27:05.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:05.576 "adrfam": "ipv4", 00:27:05.576 "trsvcid": "$NVMF_PORT", 00:27:05.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:05.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:05.576 "hdgst": ${hdgst:-false}, 00:27:05.576 "ddgst": ${ddgst:-false} 00:27:05.576 }, 00:27:05.576 "method": "bdev_nvme_attach_controller" 00:27:05.576 } 00:27:05.576 EOF 00:27:05.576 )") 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:27:05.576 15:45:42 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:05.576 "params": { 00:27:05.576 "name": "Nvme0", 00:27:05.576 "trtype": "rdma", 00:27:05.576 "traddr": "192.168.100.8", 00:27:05.576 "adrfam": "ipv4", 00:27:05.576 "trsvcid": "4420", 00:27:05.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:05.576 "hdgst": false, 00:27:05.576 "ddgst": false 00:27:05.576 }, 00:27:05.576 "method": "bdev_nvme_attach_controller" 00:27:05.576 }' 00:27:05.576 [2024-11-03 15:45:42.640045] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
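gen_nvmf_target_json above renders the --json config that test_dma reads from /dev/fd/62. Restated with jq -n instead of the traced heredoc-plus-cat (a sketch that stays valid at any indentation; the values are this run's, and hdgst/ddgst fall back to false exactly as the ${hdgst:-false} expansion did):

    jq -n --arg s 0 '{
      params: {
        name: ("Nvme" + $s),
        trtype: "rdma",
        traddr: "192.168.100.8",
        adrfam: "ipv4",
        trsvcid: "4420",
        subnqn: ("nqn.2016-06.io.spdk:cnode" + $s),
        hostnqn: ("nqn.2016-06.io.spdk:host" + $s),
        hdgst: false,
        ddgst: false
      },
      method: "bdev_nvme_attach_controller"
    }'

test_dma attaches Nvme0n1 through this bdev_nvme_attach_controller call before driving the randrw workload at it.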
00:27:05.576 [2024-11-03 15:45:42.640100] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392279 ] 00:27:05.576 [2024-11-03 15:45:42.713248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:05.576 [2024-11-03 15:45:42.737141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.576 [2024-11-03 15:45:42.737144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.849 bdev Nvme0n1 reports 1 memory domains 00:27:10.849 bdev Nvme0n1 supports RDMA memory domain 00:27:10.849 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:10.849 ========================================================================== 00:27:10.849 Latency [us] 00:27:10.849 IOPS MiB/s Average min max 00:27:10.849 Core 2: 21891.74 85.51 730.02 257.56 8653.96 00:27:10.849 Core 3: 22106.05 86.35 722.98 243.21 8598.78 00:27:10.849 ========================================================================== 00:27:10.849 Total : 43997.79 171.87 726.48 243.21 8653.96 00:27:10.849 00:27:10.849 Total operations: 220074, translate 220074 pull_push 0 memzero 0 00:27:10.849 15:45:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:27:10.849 15:45:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:27:10.849 15:45:48 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:27:10.849 [2024-11-03 15:45:48.147746] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
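In each result summary the Total IOPS row is the plain sum of the per-core rows, the min/max columns are the per-core extrema, and the average latency is IOPS-weighted. Checking the translate table above:

    awk 'BEGIN {
        c2 = 21891.74; c3 = 22106.05                      # per-core IOPS
        printf "total IOPS  %.2f\n", c2 + c3              # 43997.79, matches Total
        printf "avg latency %.2f\n", (c2*730.02 + c3*722.98) / (c2 + c3)   # 726.48 us
    }'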
00:27:10.849 [2024-11-03 15:45:48.147802] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393272 ] 00:27:10.849 [2024-11-03 15:45:48.222368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:10.849 [2024-11-03 15:45:48.245656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.850 [2024-11-03 15:45:48.245660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.123 bdev Malloc0 reports 2 memory domains 00:27:16.123 bdev Malloc0 doesn't support RDMA memory domain 00:27:16.123 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:16.123 ========================================================================== 00:27:16.123 Latency [us] 00:27:16.123 IOPS MiB/s Average min max 00:27:16.123 Core 2: 14548.29 56.83 1099.07 447.70 1777.72 00:27:16.123 Core 3: 14634.86 57.17 1092.58 457.42 1921.36 00:27:16.123 ========================================================================== 00:27:16.123 Total : 29183.15 114.00 1095.82 447.70 1921.36 00:27:16.123 00:27:16.123 Total operations: 145971, translate 0 pull_push 583884 memzero 0 00:27:16.123 15:45:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:27:16.123 15:45:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:27:16.123 15:45:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:16.123 15:45:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:27:16.123 Ignoring -M option 00:27:16.123 [2024-11-03 15:45:53.553343] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
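The suite has now driven the same binary down all three DMA paths; condensed from the invocations traced above (the $DMA shorthand and the comments are mine; the per-run --json plumbing through gen_nvmf_target_json, gen_malloc_json and gen_lvol_nvme_json is elided):

    DMA=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma
    # translate: NVMe bdev, reports an RDMA memory domain
    $DMA -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate
    # pull_push: plain malloc bdev, no RDMA memory domain support
    $DMA -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
    # memzero: lvol over the NVMe bdev; -M is ignored for randread, as logged
    $DMA -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero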
00:27:16.123 [2024-11-03 15:45:53.553399] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394137 ] 00:27:16.123 [2024-11-03 15:45:53.626060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:16.123 [2024-11-03 15:45:53.647397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.123 [2024-11-03 15:45:53.647400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.396 bdev 3c8294d7-c0c7-4f63-8e60-2380a8ce2040 reports 1 memory domains 00:27:21.396 bdev 3c8294d7-c0c7-4f63-8e60-2380a8ce2040 supports RDMA memory domain 00:27:21.396 Initialization complete, running randread IO for 5 sec on 2 cores 00:27:21.396 ========================================================================== 00:27:21.396 Latency [us] 00:27:21.396 IOPS MiB/s Average min max 00:27:21.396 Core 2: 64977.58 253.82 245.21 79.09 3552.45 00:27:21.396 Core 3: 67911.82 265.28 234.61 75.92 2070.10 00:27:21.396 ========================================================================== 00:27:21.396 Total : 132889.40 519.10 239.79 75.92 3552.45 00:27:21.396 00:27:21.396 Total operations: 664529, translate 0 pull_push 0 memzero 664529 00:27:21.396 15:45:59 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:27:21.396 [2024-11-03 15:45:59.181593] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:23.930 Initializing NVMe Controllers 00:27:23.930 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:27:23.930 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:23.930 Initialization complete. Launching workers. 00:27:23.930 ======================================================== 00:27:23.930 Latency(us) 00:27:23.930 Device Information : IOPS MiB/s Average min max 00:27:23.930 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7958.38 4997.38 10964.52 00:27:23.930 ======================================================== 00:27:23.930 Total : 2016.00 7.88 7958.38 4997.38 10964.52 00:27:23.930 00:27:23.930 15:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:27:23.930 15:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:27:23.930 15:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:23.930 15:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:27:23.930 [2024-11-03 15:46:01.522345] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
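Because both ports probed as mlx5 devices (0x15b3 - 0x1015), common.sh rebuilt NVME_CONNECT as 'nvme connect -i 15'. A manual connect to the subsystem listening above would look roughly like this; it is not executed anywhere in this log, and the hostnqn/hostid are the values nvme gen-hostnqn produced for this run:

    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e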
00:27:23.930 [2024-11-03 15:46:01.522400] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395469 ] 00:27:23.930 [2024-11-03 15:46:01.596494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:23.930 [2024-11-03 15:46:01.620114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.930 [2024-11-03 15:46:01.620117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.500 bdev a3bb858a-2ae2-4b44-8d38-069697b970be reports 1 memory domains 00:27:30.500 bdev a3bb858a-2ae2-4b44-8d38-069697b970be supports RDMA memory domain 00:27:30.500 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:30.500 ========================================================================== 00:27:30.500 Latency [us] 00:27:30.500 IOPS MiB/s Average min max 00:27:30.500 Core 2: 18997.21 74.21 841.59 28.93 13304.75 00:27:30.500 Core 3: 19349.78 75.59 826.23 12.55 13532.62 00:27:30.500 ========================================================================== 00:27:30.500 Total : 38347.00 149.79 833.84 12.55 13532.62 00:27:30.500 00:27:30.500 Total operations: 191753, translate 191648 pull_push 0 memzero 105 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:30.500 rmmod nvme_rdma 00:27:30.500 rmmod nvme_fabrics 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 2392253 ']' 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 2392253 00:27:30.500 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@952 -- # '[' -z 2392253 ']' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # kill -0 2392253 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@957 -- # uname 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2392253 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2392253' 00:27:30.501 killing 
process with pid 2392253 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@971 -- # kill 2392253 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@976 -- # wait 2392253 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:30.501 00:27:30.501 real 0m32.581s 00:27:30.501 user 1m34.775s 00:27:30.501 sys 0m6.586s 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:30.501 ************************************ 00:27:30.501 END TEST dma 00:27:30.501 ************************************ 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.501 ************************************ 00:27:30.501 START TEST nvmf_identify 00:27:30.501 ************************************ 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:30.501 * Looking for test storage... 00:27:30.501 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:30.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.501 --rc genhtml_branch_coverage=1 00:27:30.501 --rc genhtml_function_coverage=1 00:27:30.501 --rc genhtml_legend=1 00:27:30.501 --rc geninfo_all_blocks=1 00:27:30.501 --rc geninfo_unexecuted_blocks=1 00:27:30.501 00:27:30.501 ' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:30.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.501 --rc genhtml_branch_coverage=1 00:27:30.501 --rc genhtml_function_coverage=1 00:27:30.501 --rc genhtml_legend=1 00:27:30.501 --rc geninfo_all_blocks=1 00:27:30.501 --rc geninfo_unexecuted_blocks=1 00:27:30.501 00:27:30.501 ' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:30.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.501 --rc genhtml_branch_coverage=1 00:27:30.501 --rc genhtml_function_coverage=1 00:27:30.501 --rc genhtml_legend=1 00:27:30.501 --rc geninfo_all_blocks=1 00:27:30.501 --rc geninfo_unexecuted_blocks=1 00:27:30.501 00:27:30.501 ' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:30.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.501 --rc genhtml_branch_coverage=1 00:27:30.501 --rc genhtml_function_coverage=1 00:27:30.501 --rc genhtml_legend=1 00:27:30.501 --rc geninfo_all_blocks=1 00:27:30.501 --rc geninfo_unexecuted_blocks=1 00:27:30.501 00:27:30.501 ' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:30.501 15:46:07 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.501 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:30.502 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:30.502 15:46:07 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.502 15:46:07 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.076 15:46:14 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:37.076 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:37.076 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:37.076 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:37.076 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:37.076 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:37.077 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:37.077 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:37.077 altname enp217s0f0np0 00:27:37.077 altname ens818f0np0 00:27:37.077 inet 192.168.100.8/24 scope global mlx_0_0 00:27:37.077 valid_lft forever preferred_lft forever 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:37.077 15:46:14 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:37.077 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:37.077 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:37.077 altname enp217s0f1np1 00:27:37.077 altname ens818f1np1 00:27:37.077 inet 192.168.100.9/24 scope global mlx_0_1 00:27:37.077 valid_lft forever preferred_lft forever 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:27:37.077 15:46:14 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:37.077 192.168.100.9' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:37.077 192.168.100.9' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:37.077 192.168.100.9' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2399693 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 2399693 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 2399693 ']' 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:37.077 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.078 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:37.078 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.078 [2024-11-03 15:46:14.591638] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:27:37.078 [2024-11-03 15:46:14.591687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.078 [2024-11-03 15:46:14.668890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.078 [2024-11-03 15:46:14.692480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.078 [2024-11-03 15:46:14.692521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.078 [2024-11-03 15:46:14.692532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.078 [2024-11-03 15:46:14.692540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.078 [2024-11-03 15:46:14.692547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.078 [2024-11-03 15:46:14.694119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.078 [2024-11-03 15:46:14.694149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.078 [2024-11-03 15:46:14.694247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.078 [2024-11-03 15:46:14.694248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.078 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:37.078 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:27:37.078 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:37.078 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.078 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.078 [2024-11-03 15:46:14.818963] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23f6c50/0x23fb100) succeed. 00:27:37.078 [2024-11-03 15:46:14.828145] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23f8290/0x243c7a0) succeed. 
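For reference, the interface-address lookups traced earlier (allocate_nic_ips / get_ip_address on mlx_0_0 and mlx_0_1) reduce to one small pipeline. A minimal sketch of that helper, assuming the same ip/awk/cut chain visible in the trace (the real implementation lives in test/nvmf/common.sh):

    get_ip_address() {
        # Print the first IPv4 address on an interface, stripped of its
        # /prefix length, exactly as the traced pipeline does.
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node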
00:27:37.337 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.337 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:37.337 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.337 15:46:14 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.337 Malloc0 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.337 [2024-11-03 15:46:15.062382] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.337 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.337 [ 00:27:37.337 { 00:27:37.337 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:37.337 "subtype": "Discovery", 00:27:37.337 "listen_addresses": [ 00:27:37.337 { 00:27:37.337 "trtype": "RDMA", 
00:27:37.337 "adrfam": "IPv4", 00:27:37.337 "traddr": "192.168.100.8", 00:27:37.337 "trsvcid": "4420" 00:27:37.337 } 00:27:37.337 ], 00:27:37.337 "allow_any_host": true, 00:27:37.337 "hosts": [] 00:27:37.337 }, 00:27:37.337 { 00:27:37.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.337 "subtype": "NVMe", 00:27:37.337 "listen_addresses": [ 00:27:37.338 { 00:27:37.338 "trtype": "RDMA", 00:27:37.338 "adrfam": "IPv4", 00:27:37.338 "traddr": "192.168.100.8", 00:27:37.338 "trsvcid": "4420" 00:27:37.338 } 00:27:37.338 ], 00:27:37.338 "allow_any_host": true, 00:27:37.338 "hosts": [], 00:27:37.338 "serial_number": "SPDK00000000000001", 00:27:37.338 "model_number": "SPDK bdev Controller", 00:27:37.338 "max_namespaces": 32, 00:27:37.338 "min_cntlid": 1, 00:27:37.338 "max_cntlid": 65519, 00:27:37.338 "namespaces": [ 00:27:37.338 { 00:27:37.338 "nsid": 1, 00:27:37.338 "bdev_name": "Malloc0", 00:27:37.338 "name": "Malloc0", 00:27:37.338 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:37.338 "eui64": "ABCDEF0123456789", 00:27:37.338 "uuid": "5797f25b-04f3-4bd7-8819-792dc3dc671a" 00:27:37.338 } 00:27:37.338 ] 00:27:37.338 } 00:27:37.338 ] 00:27:37.338 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.338 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:37.338 [2024-11-03 15:46:15.120549] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:27:37.338 [2024-11-03 15:46:15.120587] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399736 ] 00:27:37.604 [2024-11-03 15:46:15.181155] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:27:37.604 [2024-11-03 15:46:15.181231] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:37.604 [2024-11-03 15:46:15.181247] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:37.604 [2024-11-03 15:46:15.181252] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:37.604 [2024-11-03 15:46:15.181284] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:27:37.604 [2024-11-03 15:46:15.202504] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
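The rpc_cmd calls traced above (host/identify.sh lines 24-35) are thin wrappers around SPDK's stock RPC client. Run standalone against the same running nvmf_tgt, the equivalent sequence would look roughly like this, a sketch assuming the default /var/tmp/spdk.sock RPC socket, with every argument copied verbatim from the trace:

    # Create the RDMA transport, a 64 MiB / 512 B-block malloc bdev, the
    # NVM subsystem, its namespace, and the data + discovery listeners.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The nvmf_get_subsystems JSON above is simply the target's view after these six calls: one discovery subsystem and one NVM subsystem, both listening on 192.168.100.8:4420 over RDMA.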
00:27:37.604 [2024-11-03 15:46:15.212635] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:37.604 [2024-11-03 15:46:15.212646] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:37.604 [2024-11-03 15:46:15.212652] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212660] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212666] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212672] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212678] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212684] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212690] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212696] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212702] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212708] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212715] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212721] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212727] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212733] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212739] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212747] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212754] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212760] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212766] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212772] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212778] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212784] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212790] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 
15:46:15.212796] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212802] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212808] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212814] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212820] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212826] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212832] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212838] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212844] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:37.604 [2024-11-03 15:46:15.212849] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:37.604 [2024-11-03 15:46:15.212854] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:37.604 [2024-11-03 15:46:15.212870] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.212883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x183000 00:27:37.604 [2024-11-03 15:46:15.217972] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.604 [2024-11-03 15:46:15.217982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:37.604 [2024-11-03 15:46:15.217990] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.217997] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:37.604 [2024-11-03 15:46:15.218004] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:27:37.604 [2024-11-03 15:46:15.218011] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:27:37.604 [2024-11-03 15:46:15.218025] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.218034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.604 [2024-11-03 15:46:15.218057] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.604 [2024-11-03 15:46:15.218063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:37.604 [2024-11-03 15:46:15.218072] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:27:37.604 [2024-11-03 15:46:15.218078] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.218085] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:27:37.604 [2024-11-03 15:46:15.218092] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.218100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.604 [2024-11-03 15:46:15.218119] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.604 [2024-11-03 15:46:15.218125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:37.604 [2024-11-03 15:46:15.218131] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:27:37.604 [2024-11-03 15:46:15.218137] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.218144] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:37.604 [2024-11-03 15:46:15.218152] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.604 [2024-11-03 15:46:15.218159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.605 [2024-11-03 15:46:15.218182] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218194] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:37.605 [2024-11-03 15:46:15.218200] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218208] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.605 [2024-11-03 15:46:15.218231] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218243] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:37.605 [2024-11-03 15:46:15.218249] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:37.605 [2024-11-03 15:46:15.218255] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218261] 
nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:37.605 [2024-11-03 15:46:15.218368] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:27:37.605 [2024-11-03 15:46:15.218374] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:37.605 [2024-11-03 15:46:15.218383] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.605 [2024-11-03 15:46:15.218409] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218421] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:37.605 [2024-11-03 15:46:15.218426] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218434] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.605 [2024-11-03 15:46:15.218461] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218473] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:37.605 [2024-11-03 15:46:15.218479] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:37.605 [2024-11-03 15:46:15.218485] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218492] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:27:37.605 [2024-11-03 15:46:15.218500] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:37.605 [2024-11-03 15:46:15.218510] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183000 00:27:37.605 [2024-11-03 15:46:15.218558] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 
15:46:15.218564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218572] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:27:37.605 [2024-11-03 15:46:15.218578] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:27:37.605 [2024-11-03 15:46:15.218584] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:27:37.605 [2024-11-03 15:46:15.218590] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:27:37.605 [2024-11-03 15:46:15.218596] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:27:37.605 [2024-11-03 15:46:15.218602] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:27:37.605 [2024-11-03 15:46:15.218607] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218617] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:37.605 [2024-11-03 15:46:15.218629] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.605 [2024-11-03 15:46:15.218654] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218668] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.605 [2024-11-03 15:46:15.218681] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.605 [2024-11-03 15:46:15.218695] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.605 [2024-11-03 15:46:15.218709] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.605 [2024-11-03 15:46:15.218721] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to set keep alive timeout (timeout 30000 ms) 00:27:37.605 [2024-11-03 15:46:15.218727] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218737] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:37.605 [2024-11-03 15:46:15.218745] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.605 [2024-11-03 15:46:15.218771] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218783] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:27:37.605 [2024-11-03 15:46:15.218789] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:27:37.605 [2024-11-03 15:46:15.218795] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218803] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183000 00:27:37.605 [2024-11-03 15:46:15.218839] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218852] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218862] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:27:37.605 [2024-11-03 15:46:15.218886] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183000 00:27:37.605 [2024-11-03 15:46:15.218902] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.605 [2024-11-03 15:46:15.218924] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218941] 
nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0ac0 length 0x40 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183000 00:27:37.605 [2024-11-03 15:46:15.218954] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218960] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218976] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183000 00:27:37.605 [2024-11-03 15:46:15.218982] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.605 [2024-11-03 15:46:15.218988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:37.605 [2024-11-03 15:46:15.218997] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183000 00:27:37.606 [2024-11-03 15:46:15.219005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183000 00:27:37.606 [2024-11-03 15:46:15.219011] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183000 00:27:37.606 [2024-11-03 15:46:15.219029] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.606 [2024-11-03 15:46:15.219034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:37.606 [2024-11-03 15:46:15.219045] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183000 00:27:37.606 ===================================================== 00:27:37.606 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:37.606 ===================================================== 00:27:37.606 Controller Capabilities/Features 00:27:37.606 ================================ 00:27:37.606 Vendor ID: 0000 00:27:37.606 Subsystem Vendor ID: 0000 00:27:37.606 Serial Number: .................... 00:27:37.606 Model Number: ........................................ 
00:27:37.606 Firmware Version: 25.01 00:27:37.606 Recommended Arb Burst: 0 00:27:37.606 IEEE OUI Identifier: 00 00 00 00:27:37.606 Multi-path I/O 00:27:37.606 May have multiple subsystem ports: No 00:27:37.606 May have multiple controllers: No 00:27:37.606 Associated with SR-IOV VF: No 00:27:37.606 Max Data Transfer Size: 131072 00:27:37.606 Max Number of Namespaces: 0 00:27:37.606 Max Number of I/O Queues: 1024 00:27:37.606 NVMe Specification Version (VS): 1.3 00:27:37.606 NVMe Specification Version (Identify): 1.3 00:27:37.606 Maximum Queue Entries: 128 00:27:37.606 Contiguous Queues Required: Yes 00:27:37.606 Arbitration Mechanisms Supported 00:27:37.606 Weighted Round Robin: Not Supported 00:27:37.606 Vendor Specific: Not Supported 00:27:37.606 Reset Timeout: 15000 ms 00:27:37.606 Doorbell Stride: 4 bytes 00:27:37.606 NVM Subsystem Reset: Not Supported 00:27:37.606 Command Sets Supported 00:27:37.606 NVM Command Set: Supported 00:27:37.606 Boot Partition: Not Supported 00:27:37.606 Memory Page Size Minimum: 4096 bytes 00:27:37.606 Memory Page Size Maximum: 4096 bytes 00:27:37.606 Persistent Memory Region: Not Supported 00:27:37.606 Optional Asynchronous Events Supported 00:27:37.606 Namespace Attribute Notices: Not Supported 00:27:37.606 Firmware Activation Notices: Not Supported 00:27:37.606 ANA Change Notices: Not Supported 00:27:37.606 PLE Aggregate Log Change Notices: Not Supported 00:27:37.606 LBA Status Info Alert Notices: Not Supported 00:27:37.606 EGE Aggregate Log Change Notices: Not Supported 00:27:37.606 Normal NVM Subsystem Shutdown event: Not Supported 00:27:37.606 Zone Descriptor Change Notices: Not Supported 00:27:37.606 Discovery Log Change Notices: Supported 00:27:37.606 Controller Attributes 00:27:37.606 128-bit Host Identifier: Not Supported 00:27:37.606 Non-Operational Permissive Mode: Not Supported 00:27:37.606 NVM Sets: Not Supported 00:27:37.606 Read Recovery Levels: Not Supported 00:27:37.606 Endurance Groups: Not Supported 00:27:37.606 Predictable Latency Mode: Not Supported 00:27:37.606 Traffic Based Keep ALive: Not Supported 00:27:37.606 Namespace Granularity: Not Supported 00:27:37.606 SQ Associations: Not Supported 00:27:37.606 UUID List: Not Supported 00:27:37.606 Multi-Domain Subsystem: Not Supported 00:27:37.606 Fixed Capacity Management: Not Supported 00:27:37.606 Variable Capacity Management: Not Supported 00:27:37.606 Delete Endurance Group: Not Supported 00:27:37.606 Delete NVM Set: Not Supported 00:27:37.606 Extended LBA Formats Supported: Not Supported 00:27:37.606 Flexible Data Placement Supported: Not Supported 00:27:37.606 00:27:37.606 Controller Memory Buffer Support 00:27:37.606 ================================ 00:27:37.606 Supported: No 00:27:37.606 00:27:37.606 Persistent Memory Region Support 00:27:37.606 ================================ 00:27:37.606 Supported: No 00:27:37.606 00:27:37.606 Admin Command Set Attributes 00:27:37.606 ============================ 00:27:37.606 Security Send/Receive: Not Supported 00:27:37.606 Format NVM: Not Supported 00:27:37.606 Firmware Activate/Download: Not Supported 00:27:37.606 Namespace Management: Not Supported 00:27:37.606 Device Self-Test: Not Supported 00:27:37.606 Directives: Not Supported 00:27:37.606 NVMe-MI: Not Supported 00:27:37.606 Virtualization Management: Not Supported 00:27:37.606 Doorbell Buffer Config: Not Supported 00:27:37.606 Get LBA Status Capability: Not Supported 00:27:37.606 Command & Feature Lockdown Capability: Not Supported 00:27:37.606 Abort Command Limit: 1 00:27:37.606 Async 
Event Request Limit: 4 00:27:37.606 Number of Firmware Slots: N/A 00:27:37.606 Firmware Slot 1 Read-Only: N/A 00:27:37.606 Firmware Activation Without Reset: N/A 00:27:37.606 Multiple Update Detection Support: N/A 00:27:37.606 Firmware Update Granularity: No Information Provided 00:27:37.606 Per-Namespace SMART Log: No 00:27:37.606 Asymmetric Namespace Access Log Page: Not Supported 00:27:37.606 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:37.606 Command Effects Log Page: Not Supported 00:27:37.606 Get Log Page Extended Data: Supported 00:27:37.606 Telemetry Log Pages: Not Supported 00:27:37.606 Persistent Event Log Pages: Not Supported 00:27:37.606 Supported Log Pages Log Page: May Support 00:27:37.606 Commands Supported & Effects Log Page: Not Supported 00:27:37.606 Feature Identifiers & Effects Log Page:May Support 00:27:37.606 NVMe-MI Commands & Effects Log Page: May Support 00:27:37.606 Data Area 4 for Telemetry Log: Not Supported 00:27:37.606 Error Log Page Entries Supported: 128 00:27:37.606 Keep Alive: Not Supported 00:27:37.606 00:27:37.606 NVM Command Set Attributes 00:27:37.606 ========================== 00:27:37.606 Submission Queue Entry Size 00:27:37.606 Max: 1 00:27:37.606 Min: 1 00:27:37.606 Completion Queue Entry Size 00:27:37.606 Max: 1 00:27:37.606 Min: 1 00:27:37.606 Number of Namespaces: 0 00:27:37.606 Compare Command: Not Supported 00:27:37.606 Write Uncorrectable Command: Not Supported 00:27:37.606 Dataset Management Command: Not Supported 00:27:37.606 Write Zeroes Command: Not Supported 00:27:37.606 Set Features Save Field: Not Supported 00:27:37.606 Reservations: Not Supported 00:27:37.606 Timestamp: Not Supported 00:27:37.606 Copy: Not Supported 00:27:37.606 Volatile Write Cache: Not Present 00:27:37.606 Atomic Write Unit (Normal): 1 00:27:37.606 Atomic Write Unit (PFail): 1 00:27:37.606 Atomic Compare & Write Unit: 1 00:27:37.606 Fused Compare & Write: Supported 00:27:37.606 Scatter-Gather List 00:27:37.606 SGL Command Set: Supported 00:27:37.606 SGL Keyed: Supported 00:27:37.606 SGL Bit Bucket Descriptor: Not Supported 00:27:37.606 SGL Metadata Pointer: Not Supported 00:27:37.606 Oversized SGL: Not Supported 00:27:37.606 SGL Metadata Address: Not Supported 00:27:37.606 SGL Offset: Supported 00:27:37.606 Transport SGL Data Block: Not Supported 00:27:37.606 Replay Protected Memory Block: Not Supported 00:27:37.606 00:27:37.606 Firmware Slot Information 00:27:37.606 ========================= 00:27:37.606 Active slot: 0 00:27:37.606 00:27:37.606 00:27:37.606 Error Log 00:27:37.606 ========= 00:27:37.606 00:27:37.606 Active Namespaces 00:27:37.606 ================= 00:27:37.606 Discovery Log Page 00:27:37.606 ================== 00:27:37.606 Generation Counter: 2 00:27:37.606 Number of Records: 2 00:27:37.606 Record Format: 0 00:27:37.606 00:27:37.606 Discovery Log Entry 0 00:27:37.606 ---------------------- 00:27:37.606 Transport Type: 1 (RDMA) 00:27:37.606 Address Family: 1 (IPv4) 00:27:37.606 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:37.606 Entry Flags: 00:27:37.606 Duplicate Returned Information: 1 00:27:37.606 Explicit Persistent Connection Support for Discovery: 1 00:27:37.606 Transport Requirements: 00:27:37.606 Secure Channel: Not Required 00:27:37.606 Port ID: 0 (0x0000) 00:27:37.606 Controller ID: 65535 (0xffff) 00:27:37.606 Admin Max SQ Size: 128 00:27:37.606 Transport Service Identifier: 4420 00:27:37.606 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:37.606 Transport Address: 192.168.100.8 00:27:37.606 
Transport Specific Address Subtype - RDMA 00:27:37.606 RDMA QP Service Type: 1 (Reliable Connected) 00:27:37.606 RDMA Provider Type: 1 (No provider specified) 00:27:37.606 RDMA CM Service: 1 (RDMA_CM) 00:27:37.606 Discovery Log Entry 1 00:27:37.606 ---------------------- 00:27:37.606 Transport Type: 1 (RDMA) 00:27:37.606 Address Family: 1 (IPv4) 00:27:37.606 Subsystem Type: 2 (NVM Subsystem) 00:27:37.606 Entry Flags: 00:27:37.606 Duplicate Returned Information: 0 00:27:37.606 Explicit Persistent Connection Support for Discovery: 0 00:27:37.606 Transport Requirements: 00:27:37.606 Secure Channel: Not Required 00:27:37.606 Port ID: 0 (0x0000) 00:27:37.606 Controller ID: 65535 (0xffff) 00:27:37.607 Admin Max SQ Size: [2024-11-03 15:46:15.219116] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:27:37.607 [2024-11-03 15:46:15.219125] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 23259 doesn't match qid 00:27:37.607 [2024-11-03 15:46:15.219140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32512 cdw0:467cc7d0 sqhd:9c30 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219146] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 23259 doesn't match qid 00:27:37.607 [2024-11-03 15:46:15.219154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32512 cdw0:467cc7d0 sqhd:9c30 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219161] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 23259 doesn't match qid 00:27:37.607 [2024-11-03 15:46:15.219168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32512 cdw0:467cc7d0 sqhd:9c30 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219175] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 23259 doesn't match qid 00:27:37.607 [2024-11-03 15:46:15.219184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32512 cdw0:467cc7d0 sqhd:9c30 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219192] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219218] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219232] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219246] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219264] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219276] 
nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:27:37.607 [2024-11-03 15:46:15.219282] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:27:37.607 [2024-11-03 15:46:15.219288] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219296] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219321] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219333] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219342] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219371] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219383] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219392] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219415] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219427] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219436] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219464] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219476] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219484] 
nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219513] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219525] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219534] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219558] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219570] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219578] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219606] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219618] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219626] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219654] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219665] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219674] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219697] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219709] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219719] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219746] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219758] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219766] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219791] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219803] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219811] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219835] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219846] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219855] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219878] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:37.607 [2024-11-03 15:46:15.219890] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219898] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.607 [2024-11-03 15:46:15.219906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:27:37.607 [2024-11-03 15:46:15.219925] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.607 [2024-11-03 15:46:15.219931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.219937] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.219945] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.219953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.219975] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.219981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.219987] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.219997] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220019] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220031] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220039] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220064] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220076] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220084] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220113] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220125] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220133] 
nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220160] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220172] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220180] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220207] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220219] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220227] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220256] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220269] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220278] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220308] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220320] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220328] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220357] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220369] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220377] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220400] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220412] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220421] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220447] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220459] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220467] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220490] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220502] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220511] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220539] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220552] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220561] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220586] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220597] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220606] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.608 [2024-11-03 15:46:15.220630] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.608 [2024-11-03 15:46:15.220636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:27:37.608 [2024-11-03 15:46:15.220642] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183000 00:27:37.608 [2024-11-03 15:46:15.220651] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.220677] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.220683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.220689] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220697] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.220722] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.220728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.220734] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220743] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.220771] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.220777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.220783] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220792] 
nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.220816] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.220823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.220830] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220838] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.220867] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.220872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.220878] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220887] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.220912] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.220917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.220924] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220932] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.220955] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.220961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.220971] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220980] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.220987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221008] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221020] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221029] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221057] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221069] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221078] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221103] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221117] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221125] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221149] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221160] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221169] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221192] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221203] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221212] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221239] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221250] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221259] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221282] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221294] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221302] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221329] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221341] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221350] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221376] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221388] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221396] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221421] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221433] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221441] 
nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.609 [2024-11-03 15:46:15.221468] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.609 [2024-11-03 15:46:15.221474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:37.609 [2024-11-03 15:46:15.221480] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183000 00:27:37.609 [2024-11-03 15:46:15.221488] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221519] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221531] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221539] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221564] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221576] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221585] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221608] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221619] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221628] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221658] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221669] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221678] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221703] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221714] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221723] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221746] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221758] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221766] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221793] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221805] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221813] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221838] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221850] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221858] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221891] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221903] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221911] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.221937] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.221943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.221949] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.221958] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.225971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.225981] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.225987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.225993] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.226002] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.226010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.610 [2024-11-03 15:46:15.226031] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.610 [2024-11-03 15:46:15.226036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:27:37.610 [2024-11-03 15:46:15.226043] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.226049] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:27:37.610 128 00:27:37.610 Transport Service Identifier: 4420 00:27:37.610 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:37.610 Transport Address: 192.168.100.8 00:27:37.610 Transport Specific Address Subtype - RDMA 00:27:37.610 RDMA QP Service Type: 1 (Reliable Connected) 00:27:37.610 RDMA Provider Type: 1 (No provider specified) 00:27:37.610 RDMA CM Service: 1 (RDMA_CM) 00:27:37.610 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L 
all 00:27:37.610 [2024-11-03 15:46:15.297559] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:27:37.610 [2024-11-03 15:46:15.297605] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399816 ] 00:27:37.610 [2024-11-03 15:46:15.358134] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:27:37.610 [2024-11-03 15:46:15.358208] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:27:37.610 [2024-11-03 15:46:15.358224] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:27:37.610 [2024-11-03 15:46:15.358229] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:27:37.610 [2024-11-03 15:46:15.358255] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:27:37.610 [2024-11-03 15:46:15.368528] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:27:37.610 [2024-11-03 15:46:15.378599] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:37.610 [2024-11-03 15:46:15.378610] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:27:37.610 [2024-11-03 15:46:15.378617] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378624] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378630] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378637] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378643] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378649] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378655] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378661] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378667] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378674] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378680] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378686] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378692] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183000 00:27:37.610 [2024-11-03 15:46:15.378698] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183000 
00:27:37.610 [2024-11-03 15:46:15.378705] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378711] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378717] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378723] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378729] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378735] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378741] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378748] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378754] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378760] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378766] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378772] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378778] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378785] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378791] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378800] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378806] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378812] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:27:37.611 [2024-11-03 15:46:15.378817] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:27:37.611 [2024-11-03 15:46:15.378822] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:27:37.611 [2024-11-03 15:46:15.378837] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.378849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x183000 00:27:37.611 [2024-11-03 15:46:15.383972] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.611 [2024-11-03 15:46:15.383981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:37.611 [2024-11-03 15:46:15.383988] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183000 
00:27:37.611 [2024-11-03 15:46:15.383995] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:37.611 [2024-11-03 15:46:15.384002] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:27:37.611 [2024-11-03 15:46:15.384009] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:27:37.611 [2024-11-03 15:46:15.384020] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.611 [2024-11-03 15:46:15.384047] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.611 [2024-11-03 15:46:15.384053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:27:37.611 [2024-11-03 15:46:15.384059] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:27:37.611 [2024-11-03 15:46:15.384065] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384072] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:27:37.611 [2024-11-03 15:46:15.384079] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.611 [2024-11-03 15:46:15.384101] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.611 [2024-11-03 15:46:15.384107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:27:37.611 [2024-11-03 15:46:15.384114] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:27:37.611 [2024-11-03 15:46:15.384119] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384126] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:37.611 [2024-11-03 15:46:15.384134] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.611 [2024-11-03 15:46:15.384162] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.611 [2024-11-03 15:46:15.384168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:37.611 [2024-11-03 15:46:15.384174] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:37.611 [2024-11-03 15:46:15.384180] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384188] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.611 [2024-11-03 15:46:15.384214] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.611 [2024-11-03 15:46:15.384219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:37.611 [2024-11-03 15:46:15.384225] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:37.611 [2024-11-03 15:46:15.384232] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:37.611 [2024-11-03 15:46:15.384238] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384244] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:37.611 [2024-11-03 15:46:15.384351] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:27:37.611 [2024-11-03 15:46:15.384357] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:37.611 [2024-11-03 15:46:15.384366] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.611 [2024-11-03 15:46:15.384390] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.611 [2024-11-03 15:46:15.384396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:37.611 [2024-11-03 15:46:15.384402] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:37.611 [2024-11-03 15:46:15.384408] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384416] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.611 [2024-11-03 15:46:15.384439] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.611 [2024-11-03 15:46:15.384445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:37.611 [2024-11-03 15:46:15.384451] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:37.611 [2024-11-03 15:46:15.384457] 
nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:37.611 [2024-11-03 15:46:15.384463] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384469] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:27:37.611 [2024-11-03 15:46:15.384480] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:37.611 [2024-11-03 15:46:15.384489] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183000 00:27:37.611 [2024-11-03 15:46:15.384533] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.611 [2024-11-03 15:46:15.384539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:37.611 [2024-11-03 15:46:15.384547] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:27:37.611 [2024-11-03 15:46:15.384553] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:27:37.611 [2024-11-03 15:46:15.384558] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:27:37.611 [2024-11-03 15:46:15.384563] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:27:37.611 [2024-11-03 15:46:15.384569] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:27:37.611 [2024-11-03 15:46:15.384575] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:27:37.611 [2024-11-03 15:46:15.384581] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384593] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:37.611 [2024-11-03 15:46:15.384603] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.611 [2024-11-03 15:46:15.384611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.611 [2024-11-03 15:46:15.384629] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.611 [2024-11-03 15:46:15.384634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:37.611 [2024-11-03 15:46:15.384643] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.612 
[2024-11-03 15:46:15.384657] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.612 [2024-11-03 15:46:15.384670] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.612 [2024-11-03 15:46:15.384684] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.612 [2024-11-03 15:46:15.384696] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.384702] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384714] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.384722] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.612 [2024-11-03 15:46:15.384749] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.384754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.384761] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:27:37.612 [2024-11-03 15:46:15.384767] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.384773] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384780] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.384789] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.384796] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.612 [2024-11-03 15:46:15.384824] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.384829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.384882] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.384888] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384895] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.384903] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183000 00:27:37.612 [2024-11-03 15:46:15.384935] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.384940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.384959] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:27:37.612 [2024-11-03 15:46:15.384973] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.384979] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.384987] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.384995] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183000 00:27:37.612 [2024-11-03 15:46:15.385035] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.385041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.385051] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.385057] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385065] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.385073] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183000 00:27:37.612 [2024-11-03 
15:46:15.385102] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.385107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.385118] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.385124] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385131] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.385140] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.385147] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.385154] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.385160] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.385166] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:27:37.612 [2024-11-03 15:46:15.385172] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:27:37.612 [2024-11-03 15:46:15.385178] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:27:37.612 [2024-11-03 15:46:15.385192] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385199] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.612 [2024-11-03 15:46:15.385206] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.612 [2024-11-03 15:46:15.385224] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.385230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.385236] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385244] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.385249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.385255] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183000 00:27:37.612 
[2024-11-03 15:46:15.385264] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.612 [2024-11-03 15:46:15.385294] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.385300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.385306] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385315] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.612 [2024-11-03 15:46:15.385351] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.385356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.385362] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385371] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.612 [2024-11-03 15:46:15.385398] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.612 [2024-11-03 15:46:15.385404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:27:37.612 [2024-11-03 15:46:15.385410] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385423] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183000 00:27:37.612 [2024-11-03 15:46:15.385431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183000 00:27:37.612 [2024-11-03 15:46:15.385439] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183000 00:27:37.613 [2024-11-03 15:46:15.385446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183000 00:27:37.613 [2024-11-03 15:46:15.385454] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0ac0 length 0x40 lkey 0x183000 00:27:37.613 [2024-11-03 15:46:15.385461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183000 00:27:37.613 [2024-11-03 
15:46:15.385469] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x183000 00:27:37.613 [2024-11-03 15:46:15.385477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183000 00:27:37.613 [2024-11-03 15:46:15.385485] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.613 [2024-11-03 15:46:15.385494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:37.613 [2024-11-03 15:46:15.385507] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183000 00:27:37.613 [2024-11-03 15:46:15.385513] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.613 [2024-11-03 15:46:15.385518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:37.613 [2024-11-03 15:46:15.385530] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183000 00:27:37.613 [2024-11-03 15:46:15.385536] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.613 [2024-11-03 15:46:15.385541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:37.613 [2024-11-03 15:46:15.385548] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183000 00:27:37.613 [2024-11-03 15:46:15.385554] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.613 [2024-11-03 15:46:15.385559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:37.613 [2024-11-03 15:46:15.385569] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183000 00:27:37.613 ===================================================== 00:27:37.613 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:37.613 ===================================================== 00:27:37.613 Controller Capabilities/Features 00:27:37.613 ================================ 00:27:37.613 Vendor ID: 8086 00:27:37.613 Subsystem Vendor ID: 8086 00:27:37.613 Serial Number: SPDK00000000000001 00:27:37.613 Model Number: SPDK bdev Controller 00:27:37.613 Firmware Version: 25.01 00:27:37.613 Recommended Arb Burst: 6 00:27:37.613 IEEE OUI Identifier: e4 d2 5c 00:27:37.613 Multi-path I/O 00:27:37.613 May have multiple subsystem ports: Yes 00:27:37.613 May have multiple controllers: Yes 00:27:37.613 Associated with SR-IOV VF: No 00:27:37.613 Max Data Transfer Size: 131072 00:27:37.613 Max Number of Namespaces: 32 00:27:37.613 Max Number of I/O Queues: 127 00:27:37.613 NVMe Specification Version (VS): 1.3 00:27:37.613 NVMe Specification Version (Identify): 1.3 00:27:37.613 Maximum Queue Entries: 128 00:27:37.613 Contiguous Queues Required: Yes 00:27:37.613 Arbitration Mechanisms Supported 00:27:37.613 Weighted Round Robin: Not Supported 00:27:37.613 Vendor Specific: Not Supported 00:27:37.613 Reset Timeout: 15000 ms 00:27:37.613 Doorbell Stride: 4 bytes 00:27:37.613 NVM Subsystem Reset: Not Supported 00:27:37.613 Command Sets Supported 00:27:37.613 NVM Command Set: Supported 00:27:37.613 Boot Partition: Not Supported 00:27:37.613 
Memory Page Size Minimum: 4096 bytes 00:27:37.613 Memory Page Size Maximum: 4096 bytes 00:27:37.613 Persistent Memory Region: Not Supported 00:27:37.613 Optional Asynchronous Events Supported 00:27:37.613 Namespace Attribute Notices: Supported 00:27:37.613 Firmware Activation Notices: Not Supported 00:27:37.613 ANA Change Notices: Not Supported 00:27:37.613 PLE Aggregate Log Change Notices: Not Supported 00:27:37.613 LBA Status Info Alert Notices: Not Supported 00:27:37.613 EGE Aggregate Log Change Notices: Not Supported 00:27:37.613 Normal NVM Subsystem Shutdown event: Not Supported 00:27:37.613 Zone Descriptor Change Notices: Not Supported 00:27:37.613 Discovery Log Change Notices: Not Supported 00:27:37.613 Controller Attributes 00:27:37.613 128-bit Host Identifier: Supported 00:27:37.613 Non-Operational Permissive Mode: Not Supported 00:27:37.613 NVM Sets: Not Supported 00:27:37.613 Read Recovery Levels: Not Supported 00:27:37.613 Endurance Groups: Not Supported 00:27:37.613 Predictable Latency Mode: Not Supported 00:27:37.613 Traffic Based Keep Alive: Not Supported 00:27:37.613 Namespace Granularity: Not Supported 00:27:37.613 SQ Associations: Not Supported 00:27:37.613 UUID List: Not Supported 00:27:37.613 Multi-Domain Subsystem: Not Supported 00:27:37.613 Fixed Capacity Management: Not Supported 00:27:37.613 Variable Capacity Management: Not Supported 00:27:37.613 Delete Endurance Group: Not Supported 00:27:37.613 Delete NVM Set: Not Supported 00:27:37.613 Extended LBA Formats Supported: Not Supported 00:27:37.613 Flexible Data Placement Supported: Not Supported 00:27:37.613 00:27:37.613 Controller Memory Buffer Support 00:27:37.613 ================================ 00:27:37.613 Supported: No 00:27:37.613 00:27:37.613 Persistent Memory Region Support 00:27:37.613 ================================ 00:27:37.613 Supported: No 00:27:37.613 00:27:37.613 Admin Command Set Attributes 00:27:37.613 ============================ 00:27:37.613 Security Send/Receive: Not Supported 00:27:37.613 Format NVM: Not Supported 00:27:37.613 Firmware Activate/Download: Not Supported 00:27:37.613 Namespace Management: Not Supported 00:27:37.613 Device Self-Test: Not Supported 00:27:37.613 Directives: Not Supported 00:27:37.613 NVMe-MI: Not Supported 00:27:37.613 Virtualization Management: Not Supported 00:27:37.613 Doorbell Buffer Config: Not Supported 00:27:37.613 Get LBA Status Capability: Not Supported 00:27:37.613 Command & Feature Lockdown Capability: Not Supported 00:27:37.613 Abort Command Limit: 4 00:27:37.613 Async Event Request Limit: 4 00:27:37.613 Number of Firmware Slots: N/A 00:27:37.613 Firmware Slot 1 Read-Only: N/A 00:27:37.613 Firmware Activation Without Reset: N/A 00:27:37.613 Multiple Update Detection Support: N/A 00:27:37.613 Firmware Update Granularity: No Information Provided 00:27:37.613 Per-Namespace SMART Log: No 00:27:37.613 Asymmetric Namespace Access Log Page: Not Supported 00:27:37.613 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:37.613 Command Effects Log Page: Supported 00:27:37.613 Get Log Page Extended Data: Supported 00:27:37.613 Telemetry Log Pages: Not Supported 00:27:37.613 Persistent Event Log Pages: Not Supported 00:27:37.613 Supported Log Pages Log Page: May Support 00:27:37.613 Commands Supported & Effects Log Page: Not Supported 00:27:37.613 Feature Identifiers & Effects Log Page: May Support 00:27:37.613 NVMe-MI Commands & Effects Log Page: May Support 00:27:37.613 Data Area 4 for Telemetry Log: Not Supported 00:27:37.613 Error Log Page Entries Supported: 128 
00:27:37.613 Keep Alive: Supported 00:27:37.613 Keep Alive Granularity: 10000 ms 00:27:37.613 00:27:37.613 NVM Command Set Attributes 00:27:37.613 ========================== 00:27:37.613 Submission Queue Entry Size 00:27:37.613 Max: 64 00:27:37.613 Min: 64 00:27:37.613 Completion Queue Entry Size 00:27:37.613 Max: 16 00:27:37.613 Min: 16 00:27:37.613 Number of Namespaces: 32 00:27:37.613 Compare Command: Supported 00:27:37.613 Write Uncorrectable Command: Not Supported 00:27:37.613 Dataset Management Command: Supported 00:27:37.613 Write Zeroes Command: Supported 00:27:37.613 Set Features Save Field: Not Supported 00:27:37.613 Reservations: Supported 00:27:37.613 Timestamp: Not Supported 00:27:37.613 Copy: Supported 00:27:37.613 Volatile Write Cache: Present 00:27:37.613 Atomic Write Unit (Normal): 1 00:27:37.613 Atomic Write Unit (PFail): 1 00:27:37.613 Atomic Compare & Write Unit: 1 00:27:37.613 Fused Compare & Write: Supported 00:27:37.613 Scatter-Gather List 00:27:37.613 SGL Command Set: Supported 00:27:37.613 SGL Keyed: Supported 00:27:37.613 SGL Bit Bucket Descriptor: Not Supported 00:27:37.613 SGL Metadata Pointer: Not Supported 00:27:37.613 Oversized SGL: Not Supported 00:27:37.613 SGL Metadata Address: Not Supported 00:27:37.614 SGL Offset: Supported 00:27:37.614 Transport SGL Data Block: Not Supported 00:27:37.614 Replay Protected Memory Block: Not Supported 00:27:37.614 00:27:37.614 Firmware Slot Information 00:27:37.614 ========================= 00:27:37.614 Active slot: 1 00:27:37.614 Slot 1 Firmware Revision: 25.01 00:27:37.614 00:27:37.614 00:27:37.614 Commands Supported and Effects 00:27:37.614 ============================== 00:27:37.614 Admin Commands 00:27:37.614 -------------- 00:27:37.614 Get Log Page (02h): Supported 00:27:37.614 Identify (06h): Supported 00:27:37.614 Abort (08h): Supported 00:27:37.614 Set Features (09h): Supported 00:27:37.614 Get Features (0Ah): Supported 00:27:37.614 Asynchronous Event Request (0Ch): Supported 00:27:37.614 Keep Alive (18h): Supported 00:27:37.614 I/O Commands 00:27:37.614 ------------ 00:27:37.614 Flush (00h): Supported LBA-Change 00:27:37.614 Write (01h): Supported LBA-Change 00:27:37.614 Read (02h): Supported 00:27:37.614 Compare (05h): Supported 00:27:37.614 Write Zeroes (08h): Supported LBA-Change 00:27:37.614 Dataset Management (09h): Supported LBA-Change 00:27:37.614 Copy (19h): Supported LBA-Change 00:27:37.614 00:27:37.614 Error Log 00:27:37.614 ========= 00:27:37.614 00:27:37.614 Arbitration 00:27:37.614 =========== 00:27:37.614 Arbitration Burst: 1 00:27:37.614 00:27:37.614 Power Management 00:27:37.614 ================ 00:27:37.614 Number of Power States: 1 00:27:37.614 Current Power State: Power State #0 00:27:37.614 Power State #0: 00:27:37.614 Max Power: 0.00 W 00:27:37.614 Non-Operational State: Operational 00:27:37.614 Entry Latency: Not Reported 00:27:37.614 Exit Latency: Not Reported 00:27:37.614 Relative Read Throughput: 0 00:27:37.614 Relative Read Latency: 0 00:27:37.614 Relative Write Throughput: 0 00:27:37.614 Relative Write Latency: 0 00:27:37.614 Idle Power: Not Reported 00:27:37.614 Active Power: Not Reported 00:27:37.614 Non-Operational Permissive Mode: Not Supported 00:27:37.614 00:27:37.614 Health Information 00:27:37.614 ================== 00:27:37.614 Critical Warnings: 00:27:37.614 Available Spare Space: OK 00:27:37.614 Temperature: OK 00:27:37.614 Device Reliability: OK 00:27:37.614 Read Only: No 00:27:37.614 Volatile Memory Backup: OK 00:27:37.614 Current Temperature: 0 Kelvin (-273 Celsius) 
00:27:37.614 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:37.614 Available Spare: 0% 00:27:37.614 Available Spare Threshold: 0% 00:27:37.614 Life Percentage [2024-11-03 15:46:15.385646] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.385675] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 [2024-11-03 15:46:15.385681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.385687] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385714] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:27:37.614 [2024-11-03 15:46:15.385723] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 35615 doesn't match qid 00:27:37.614 [2024-11-03 15:46:15.385738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32650 cdw0:81125bd0 sqhd:9c30 p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.385744] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 35615 doesn't match qid 00:27:37.614 [2024-11-03 15:46:15.385752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32650 cdw0:81125bd0 sqhd:9c30 p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.385758] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 35615 doesn't match qid 00:27:37.614 [2024-11-03 15:46:15.385766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32650 cdw0:81125bd0 sqhd:9c30 p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.385772] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 35615 doesn't match qid 00:27:37.614 [2024-11-03 15:46:15.385780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32650 cdw0:81125bd0 sqhd:9c30 p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.385789] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.385817] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 [2024-11-03 15:46:15.385823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.385831] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.385846] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385864] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 
[2024-11-03 15:46:15.385870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.385876] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:27:37.614 [2024-11-03 15:46:15.385882] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:27:37.614 [2024-11-03 15:46:15.385887] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385896] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.385919] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 [2024-11-03 15:46:15.385924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.385931] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385940] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.385974] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 [2024-11-03 15:46:15.385980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.385986] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.385994] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.386020] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 [2024-11-03 15:46:15.386025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.386031] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386040] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.386066] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 [2024-11-03 15:46:15.386071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.386077] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386086] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.386109] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 [2024-11-03 15:46:15.386114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.386121] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386130] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.386157] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 [2024-11-03 15:46:15.386162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.386169] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386177] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.386211] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.614 [2024-11-03 15:46:15.386217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:27:37.614 [2024-11-03 15:46:15.386223] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386232] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.614 [2024-11-03 15:46:15.386239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.614 [2024-11-03 15:46:15.386261] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.615 [2024-11-03 15:46:15.386266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:27:37.615 [2024-11-03 15:46:15.386273] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183000 00:27:37.615 [2024-11-03 15:46:15.386282] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.615 [2024-11-03 15:46:15.386289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.615 [2024-11-03 15:46:15.386311] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.615 [2024-11-03 15:46:15.386317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:37.615 [2024-11-03 15:46:15.386323] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183000 00:27:37.875 [2024-11-03 15:46:15.386332] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.875 [2024-11-03 15:46:15.386340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.875 [2024-11-03 15:46:15.386358] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.875 [2024-11-03 15:46:15.386364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:37.875 [2024-11-03 15:46:15.386370] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183000 00:27:37.875 [2024-11-03 15:46:15.386379] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.875 [2024-11-03 15:46:15.386387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.875 [2024-11-03 15:46:15.386408] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.875 [2024-11-03 15:46:15.386414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:37.875 [2024-11-03 15:46:15.386420] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183000 00:27:37.875 [2024-11-03 15:46:15.386429] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.875 [2024-11-03 15:46:15.386436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.875 [2024-11-03 15:46:15.386456] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.875 [2024-11-03 15:46:15.386461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:27:37.875 [2024-11-03 15:46:15.386468] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183000 00:27:37.875 [2024-11-03 15:46:15.386477] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.875 [2024-11-03 15:46:15.386485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.875 [2024-11-03 15:46:15.386500] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.876 [2024-11-03 15:46:15.386506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:27:37.876 [2024-11-03 15:46:15.386512] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386521] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.876 [2024-11-03 15:46:15.386551] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.876 [2024-11-03 15:46:15.386556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:27:37.876 [2024-11-03 15:46:15.386562] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386571] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.876 [2024-11-03 15:46:15.386598] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.876 [2024-11-03 15:46:15.386604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:27:37.876 [2024-11-03 15:46:15.386610] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386618] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.876 [2024-11-03 15:46:15.386651] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.876 [2024-11-03 15:46:15.386657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:27:37.876 [2024-11-03 15:46:15.386663] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386672] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.876 [2024-11-03 15:46:15.386700] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.876 [2024-11-03 15:46:15.386706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:27:37.876 [2024-11-03 15:46:15.386712] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386720] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.876 [2024-11-03 15:46:15.386751] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.876 [2024-11-03 15:46:15.386757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:27:37.876 [2024-11-03 
15:46:15.386763] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386772] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.876 [2024-11-03 15:46:15.386779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.876 [2024-11-03 15:46:15.386801] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.876 [2024-11-03 15:46:15.386806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:27:37.876 [... twenty-one further identical FABRIC PROPERTY GET / CQ recv completion / SUCCESS debug cycles (sqhd 000e through 0002) elided; only the request buffer address and the sqhd counter advance between cycles ...] 00:27:37.877 [2024-11-03 15:46:15.387842] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.877 [2024-11-03 15:46:15.387848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:27:37.877 [2024-11-03 15:46:15.387854] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183000 00:27:37.877 [2024-11-03 15:46:15.387862] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length
0x40 lkey 0x183000 00:27:37.877 [2024-11-03 15:46:15.387870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.877 [2024-11-03 15:46:15.387889] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.877 [2024-11-03 15:46:15.387894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:27:37.877 [2024-11-03 15:46:15.387901] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183000 00:27:37.877 [2024-11-03 15:46:15.387909] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.877 [2024-11-03 15:46:15.387917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.877 [2024-11-03 15:46:15.387942] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.877 [2024-11-03 15:46:15.387947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:27:37.877 [2024-11-03 15:46:15.387953] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183000 00:27:37.877 [2024-11-03 15:46:15.387962] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183000 00:27:37.877 [2024-11-03 15:46:15.391977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:27:37.877 [2024-11-03 15:46:15.391996] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:27:37.877 [2024-11-03 15:46:15.392002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0 00:27:37.877 [2024-11-03 15:46:15.392008] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183000 00:27:37.877 [2024-11-03 15:46:15.392015] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:27:37.877 Used: 0% 00:27:37.877 Data Units Read: 0 00:27:37.877 Data Units Written: 0 00:27:37.877 Host Read Commands: 0 00:27:37.877 Host Write Commands: 0 00:27:37.877 Controller Busy Time: 0 minutes 00:27:37.877 Power Cycles: 0 00:27:37.877 Power On Hours: 0 hours 00:27:37.877 Unsafe Shutdowns: 0 00:27:37.877 Unrecoverable Media Errors: 0 00:27:37.877 Lifetime Error Log Entries: 0 00:27:37.877 Warning Temperature Time: 0 minutes 00:27:37.877 Critical Temperature Time: 0 minutes 00:27:37.877 00:27:37.877 Number of Queues 00:27:37.877 ================ 00:27:37.877 Number of I/O Submission Queues: 127 00:27:37.877 Number of I/O Completion Queues: 127 00:27:37.877 00:27:37.877 Active Namespaces 00:27:37.877 ================= 00:27:37.877 Namespace ID:1 00:27:37.877 Error Recovery Timeout: Unlimited 00:27:37.877 Command Set Identifier: NVM (00h) 00:27:37.877 Deallocate: Supported 00:27:37.877 Deallocated/Unwritten Error: Not Supported 00:27:37.877 Deallocated Read Value: Unknown 00:27:37.877 Deallocate in Write Zeroes: Not Supported 00:27:37.877 Deallocated Guard Field: 0xFFFF 00:27:37.877 Flush: Supported 00:27:37.877 Reservation: Supported 00:27:37.877 Namespace Sharing Capabilities: Multiple Controllers 00:27:37.877 
Size (in LBAs): 131072 (0GiB) 00:27:37.877 Capacity (in LBAs): 131072 (0GiB) 00:27:37.877 Utilization (in LBAs): 131072 (0GiB) 00:27:37.877 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:37.877 EUI64: ABCDEF0123456789 00:27:37.877 UUID: 5797f25b-04f3-4bd7-8819-792dc3dc671a 00:27:37.877 Thin Provisioning: Not Supported 00:27:37.877 Per-NS Atomic Units: Yes 00:27:37.877 Atomic Boundary Size (Normal): 0 00:27:37.877 Atomic Boundary Size (PFail): 0 00:27:37.877 Atomic Boundary Offset: 0 00:27:37.877 Maximum Single Source Range Length: 65535 00:27:37.877 Maximum Copy Length: 65535 00:27:37.877 Maximum Source Range Count: 1 00:27:37.877 NGUID/EUI64 Never Reused: No 00:27:37.877 Namespace Write Protected: No 00:27:37.877 Number of LBA Formats: 1 00:27:37.877 Current LBA Format: LBA Format #00 00:27:37.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:37.877 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:37.877 rmmod nvme_rdma 00:27:37.877 rmmod nvme_fabrics 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2399693 ']' 00:27:37.877 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2399693 00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 2399693 ']' 00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 2399693 00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2399693 00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 
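The dump above is the tail of the identify pass: per-controller health counters, queue counts, and the namespace attributes, after which the host tears the subsystem back down. As a rough sketch of reproducing that step by hand (the installed binary name and path are assumptions based on SPDK's bundled identify example, not taken from this log):

  # Query the RDMA-attached subsystem the way host/identify.sh does
  # (binary path assumed; perf on this rig lives under build/bin):
  sudo ./build/bin/spdk_nvme_identify \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
  # Then delete the subsystem over the RPC socket, matching the
  # nvmf_delete_subsystem call traced in the log:
  sudo ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1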
00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2399693' 00:27:37.878 killing process with pid 2399693 00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 2399693 00:27:37.878 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 2399693 00:27:38.136 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:38.136 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:38.136 00:27:38.136 real 0m8.310s 00:27:38.136 user 0m6.311s 00:27:38.136 sys 0m5.686s 00:27:38.136 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:38.136 15:46:15 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.136 ************************************ 00:27:38.136 END TEST nvmf_identify 00:27:38.136 ************************************ 00:27:38.136 15:46:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:38.136 15:46:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:38.136 15:46:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:38.136 15:46:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.136 ************************************ 00:27:38.136 START TEST nvmf_perf 00:27:38.136 ************************************ 00:27:38.136 15:46:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:27:38.395 * Looking for test storage... 
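The END/START banners above come from the run_test wrapper, which brackets each suite and forwards its remaining arguments as the command to run. A simplified sketch of what it effectively executes here (the real helper in autotest_common.sh also records timing and xtrace state):

  # Simplified view of the wrapper behind the banners above.
  run_test() {
      local test_name=$1; shift
      echo "START TEST $test_name"
      "$@"    # here: test/nvmf/host/perf.sh --transport=rdma
      echo "END TEST $test_name"
  }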
00:27:38.395 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:38.395 15:46:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:38.395 15:46:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:27:38.396 15:46:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.396 --rc genhtml_branch_coverage=1 00:27:38.396 --rc genhtml_function_coverage=1 00:27:38.396 --rc genhtml_legend=1 00:27:38.396 --rc geninfo_all_blocks=1 00:27:38.396 --rc geninfo_unexecuted_blocks=1 00:27:38.396 00:27:38.396 ' 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.396 --rc genhtml_branch_coverage=1 00:27:38.396 --rc genhtml_function_coverage=1 00:27:38.396 --rc genhtml_legend=1 00:27:38.396 --rc geninfo_all_blocks=1 00:27:38.396 --rc geninfo_unexecuted_blocks=1 00:27:38.396 00:27:38.396 ' 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.396 --rc genhtml_branch_coverage=1 00:27:38.396 --rc genhtml_function_coverage=1 00:27:38.396 --rc genhtml_legend=1 00:27:38.396 --rc geninfo_all_blocks=1 00:27:38.396 --rc geninfo_unexecuted_blocks=1 00:27:38.396 00:27:38.396 ' 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.396 --rc genhtml_branch_coverage=1 00:27:38.396 --rc genhtml_function_coverage=1 00:27:38.396 --rc genhtml_legend=1 00:27:38.396 --rc geninfo_all_blocks=1 00:27:38.396 --rc geninfo_unexecuted_blocks=1 00:27:38.396 00:27:38.396 ' 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.396 15:46:16 
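The lcov probe above feeds the tool's version string into a field-by-field dotted-version comparison, which is why 1.15 sorts below 2 and the branch/function coverage options get enabled. The same idea in isolation (the function name is illustrative, not the harness's own helper):

  # Succeeds when dotted version $1 is strictly older than $2,
  # mirroring the cmp_versions loop traced above.
  version_lt() {
      local IFS=.- i
      local -a a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"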
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.396 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:38.397 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.397 15:46:16 
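The "[: : integer expression expected" complaint above is non-fatal here: common.sh line 33 runs '[' '' -eq 1 ']' with an unset knob, and test refuses to compare an empty string with -eq, so the branch simply falls through. In isolation (variable name illustrative):

  flag=""                  # unset tuning knob, as in the log
  [ "$flag" -eq 1 ]        # -> bash: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ]   # guard: default the empty value to 0 first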
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.397 15:46:16 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.039 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.040 15:46:22 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:45.040 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:45.040 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:45.040 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
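Both 0x15b3:0x1015 Mellanox functions are matched above, and each is tied to its kernel netdev purely through sysfs: the pci_net_devs assignment expands the glob /sys/bus/pci/devices/$pci/net/*. The same lookup standalone, with the address taken from this log:

  pci=0000:d9:00.0
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      basename "$dev"      # -> mlx_0_0 on this rig
  done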
00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:45.040 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:45.040 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:45.040 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:45.040 altname enp217s0f0np0 00:27:45.040 altname ens818f0np0 00:27:45.040 inet 192.168.100.8/24 scope global mlx_0_0 00:27:45.040 valid_lft forever preferred_lft forever 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:45.040 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:45.041 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:45.041 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:45.041 altname enp217s0f1np1 00:27:45.041 altname ens818f1np1 00:27:45.041 inet 192.168.100.9/24 scope global mlx_0_1 00:27:45.041 valid_lft forever preferred_lft forever 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
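get_ip_address, traced above for both ports, reduces to a single pipeline: take the first IPv4 record from ip, keep the address field, and strip the prefix length:

  # Bare IPv4 of an RDMA interface (192.168.100.8 for mlx_0_0 here):
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1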
-- # '[' '' == iso ']' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:27:45.041 192.168.100.9' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:45.041 192.168.100.9' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:45.041 192.168.100.9' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2403142 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2403142 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 2403142 ']' 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:45.041 [2024-11-03 15:46:22.597045] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
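The two discovered addresses land in RDMA_IP_LIST as a newline-separated list, and the harness peels them apart with head/tail exactly as traced above:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9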
00:27:45.041 [2024-11-03 15:46:22.597103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.041 [2024-11-03 15:46:22.675069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.041 [2024-11-03 15:46:22.698029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.041 [2024-11-03 15:46:22.698073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.041 [2024-11-03 15:46:22.698082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.041 [2024-11-03 15:46:22.698090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.041 [2024-11-03 15:46:22.698097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.041 [2024-11-03 15:46:22.699879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.041 [2024-11-03 15:46:22.699984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.041 [2024-11-03 15:46:22.700037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.041 [2024-11-03 15:46:22.700040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.041 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:45.314 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.314 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:45.314 15:46:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:48.604 15:46:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:48.604 15:46:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:48.604 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:27:48.604 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:48.604 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:48.604 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:27:48.604 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:48.604 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:27:48.604 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:27:48.864 [2024-11-03 15:46:26.494510] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:27:48.864 [2024-11-03 15:46:26.515695] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1859ef0/0x1736580) succeed. 00:27:48.864 [2024-11-03 15:46:26.525131] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x185c590/0x1777c20) succeed. 00:27:48.864 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:49.123 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:49.123 15:46:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:49.382 15:46:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:49.382 15:46:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:49.641 15:46:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:49.641 [2024-11-03 15:46:27.418320] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:49.900 15:46:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:49.900 15:46:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:27:49.901 15:46:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:27:49.901 15:46:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:49.901 15:46:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:27:51.278 Initializing NVMe Controllers 00:27:51.278 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:27:51.278 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:27:51.278 Initialization complete. Launching workers. 
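Target bring-up for the perf suite is pure RPC, in the order traced above: transport, subsystem, namespaces, listeners. Condensed, with names and addresses copied from this log:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420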
00:27:51.278 ======================================================== 00:27:51.278 Latency(us) 00:27:51.278 Device Information : IOPS MiB/s Average min max 00:27:51.278 PCIE (0000:d8:00.0) NSID 1 from core 0: 102391.26 399.97 312.21 34.94 4229.56 00:27:51.278 ======================================================== 00:27:51.278 Total : 102391.26 399.97 312.21 34.94 4229.56 00:27:51.278 00:27:51.278 15:46:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:54.568 Initializing NVMe Controllers 00:27:54.568 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.568 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:54.568 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:54.568 Initialization complete. Launching workers. 00:27:54.568 ======================================================== 00:27:54.568 Latency(us) 00:27:54.568 Device Information : IOPS MiB/s Average min max 00:27:54.568 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6681.99 26.10 148.71 51.95 8019.13 00:27:54.568 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5137.99 20.07 193.45 70.16 8077.75 00:27:54.568 ======================================================== 00:27:54.568 Total : 11819.99 46.17 168.16 51.95 8077.75 00:27:54.568 00:27:54.568 15:46:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:57.858 Initializing NVMe Controllers 00:27:57.858 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:57.858 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:57.858 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:57.858 Initialization complete. Launching workers. 00:27:57.858 ======================================================== 00:27:57.858 Latency(us) 00:27:57.858 Device Information : IOPS MiB/s Average min max 00:27:57.858 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18564.83 72.52 1722.41 491.39 7111.64 00:27:57.858 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4034.19 15.76 7989.19 5226.38 14965.41 00:27:57.858 ======================================================== 00:27:57.858 Total : 22599.02 88.28 2841.10 491.39 14965.41 00:27:57.858 00:27:58.117 15:46:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:27:58.117 15:46:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:02.308 Initializing NVMe Controllers 00:28:02.308 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.308 Controller IO queue size 128, less than required. 00:28:02.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
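Annotation: the "queue size 128, less than required" notice means the requested queue depth exceeds the controller's IO queue size, so excess requests wait in the host NVMe driver, exactly as the message says. As a sanity check on the q=1 fabric run above, Little's law (outstanding IOs = IOPS x mean latency) should come out near 2, since each of the two namespaces runs at queue depth 1:

  # figures from the q=1 table above: 11819.99 total IOPS, 168.16 us mean latency
  awk 'BEGIN { printf "%.2f outstanding IOs\n", 11819.99 * 168.16e-6 }'   # -> 1.99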
00:28:02.308 Controller IO queue size 128, less than required. 00:28:02.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.308 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:02.308 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:02.308 Initialization complete. Launching workers. 00:28:02.308 ======================================================== 00:28:02.308 Latency(us) 00:28:02.308 Device Information : IOPS MiB/s Average min max 00:28:02.308 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3987.03 996.76 32144.24 10557.63 91045.92 00:28:02.308 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4025.99 1006.50 31473.29 14609.58 66215.88 00:28:02.308 ======================================================== 00:28:02.308 Total : 8013.02 2003.26 31807.14 10557.63 91045.92 00:28:02.308 00:28:02.308 15:46:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:28:02.877 No valid NVMe controllers or AIO or URING devices found 00:28:02.877 Initializing NVMe Controllers 00:28:02.877 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.877 Controller IO queue size 128, less than required. 00:28:02.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.877 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:02.877 Controller IO queue size 128, less than required. 00:28:02.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.877 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:02.877 WARNING: Some requested NVMe devices were skipped 00:28:02.877 15:46:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:28:07.073 Initializing NVMe Controllers 00:28:07.073 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.073 Controller IO queue size 128, less than required. 00:28:07.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:07.073 Controller IO queue size 128, less than required. 00:28:07.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:07.073 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:07.073 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:07.073 Initialization complete. Launching workers. 
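Annotation: the -o 36964 run above skipped every namespace because spdk_nvme_perf only accepts IO sizes that are a whole multiple of the namespace sector size, and 36964 is not a multiple of 512:

  # 72 x 512 = 36864, leaving a 100-byte remainder,
  # so NSID 1 and NSID 2 were both removed from the test
  echo $(( 36964 % 512 ))   # -> 100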
00:28:07.073 00:28:07.073 ==================== 00:28:07.073 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:07.073 RDMA transport: 00:28:07.073 dev name: mlx5_0 00:28:07.073 polls: 406529 00:28:07.073 idle_polls: 402944 00:28:07.073 completions: 45282 00:28:07.073 queued_requests: 1 00:28:07.073 total_send_wrs: 22641 00:28:07.073 send_doorbell_updates: 3351 00:28:07.073 total_recv_wrs: 22768 00:28:07.073 recv_doorbell_updates: 3356 00:28:07.073 --------------------------------- 00:28:07.073 00:28:07.073 ==================== 00:28:07.073 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:07.073 RDMA transport: 00:28:07.073 dev name: mlx5_0 00:28:07.073 polls: 406158 00:28:07.073 idle_polls: 405870 00:28:07.073 completions: 20222 00:28:07.073 queued_requests: 1 00:28:07.073 total_send_wrs: 10111 00:28:07.073 send_doorbell_updates: 260 00:28:07.073 total_recv_wrs: 10238 00:28:07.073 recv_doorbell_updates: 261 00:28:07.073 --------------------------------- 00:28:07.073 ======================================================== 00:28:07.073 Latency(us) 00:28:07.073 Device Information : IOPS MiB/s Average min max 00:28:07.073 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5660.00 1415.00 22651.20 10507.67 68097.52 00:28:07.073 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2527.50 631.88 50369.60 30602.17 76770.49 00:28:07.073 ======================================================== 00:28:07.073 Total : 8187.50 2046.88 31207.93 10507.67 76770.49 00:28:07.073 00:28:07.332 15:46:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:07.332 15:46:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:07.332 15:46:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:07.332 15:46:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:28:07.332 15:46:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=9011bc6d-f4b8-4aec-bd44-a663a9168224 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9011bc6d-f4b8-4aec-bd44-a663a9168224 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=9011bc6d-f4b8-4aec-bd44-a663a9168224 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:28:13.948 { 00:28:13.948 "uuid": "9011bc6d-f4b8-4aec-bd44-a663a9168224", 00:28:13.948 "name": "lvs_0", 00:28:13.948 "base_bdev": "Nvme0n1", 00:28:13.948 "total_data_clusters": 476466, 00:28:13.948 "free_clusters": 476466, 00:28:13.948 "block_size": 512, 00:28:13.948 "cluster_size": 4194304 00:28:13.948 
} 00:28:13.948 ]' 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="9011bc6d-f4b8-4aec-bd44-a663a9168224") .free_clusters' 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=476466 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="9011bc6d-f4b8-4aec-bd44-a663a9168224") .cluster_size' 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=1905864 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 1905864 00:28:13.948 1905864 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:13.948 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9011bc6d-f4b8-4aec-bd44-a663a9168224 lbd_0 20480 00:28:14.207 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4c91dfb0-349b-47bd-affd-a37a484e39bf 00:28:14.207 15:46:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4c91dfb0-349b-47bd-affd-a37a484e39bf lvs_n_0 00:28:16.744 15:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=e2dfc484-4488-42a2-9f11-db2aeab2aa13 00:28:16.744 15:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb e2dfc484-4488-42a2-9f11-db2aeab2aa13 00:28:16.744 15:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=e2dfc484-4488-42a2-9f11-db2aeab2aa13 00:28:16.744 15:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:28:16.744 15:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:28:16.744 15:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:28:16.744 15:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:28:16.744 { 00:28:16.744 "uuid": "9011bc6d-f4b8-4aec-bd44-a663a9168224", 00:28:16.744 "name": "lvs_0", 00:28:16.744 "base_bdev": "Nvme0n1", 00:28:16.744 "total_data_clusters": 476466, 00:28:16.744 "free_clusters": 471346, 00:28:16.744 "block_size": 512, 00:28:16.744 "cluster_size": 4194304 00:28:16.744 }, 00:28:16.744 { 00:28:16.744 "uuid": "e2dfc484-4488-42a2-9f11-db2aeab2aa13", 00:28:16.744 "name": "lvs_n_0", 00:28:16.744 "base_bdev": "4c91dfb0-349b-47bd-affd-a37a484e39bf", 00:28:16.744 "total_data_clusters": 5114, 00:28:16.744 "free_clusters": 5114, 00:28:16.744 "block_size": 512, 00:28:16.744 "cluster_size": 4194304 00:28:16.744 } 00:28:16.744 ]' 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="e2dfc484-4488-42a2-9f11-db2aeab2aa13") .free_clusters' 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=5114 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | 
select(.uuid=="e2dfc484-4488-42a2-9f11-db2aeab2aa13") .cluster_size' 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=20456 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 20456 00:28:16.744 20456 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e2dfc484-4488-42a2-9f11-db2aeab2aa13 lbd_nest_0 20456 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=09db43ca-a6c2-4bc1-b26e-88e784306f2b 00:28:16.744 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:17.003 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:17.003 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 09db43ca-a6c2-4bc1-b26e-88e784306f2b 00:28:17.262 15:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:17.262 15:46:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:17.262 15:46:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:17.262 15:46:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:17.262 15:46:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:17.262 15:46:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:29.471 Initializing NVMe Controllers 00:28:29.471 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.472 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:29.472 Initialization complete. Launching workers. 
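Annotation: the free-space figures extracted above follow directly from get_lvs_free_mb's arithmetic, free_mb = free_clusters * cluster_size / 1 MiB, using the 4 MiB cluster size reported by bdev_lvol_get_lvstores:

  echo $(( 476466 * 4194304 / 1048576 ))   # lvs_0   -> 1905864 MB, capped to 20480 for lbd_0
  echo $((   5114 * 4194304 / 1048576 ))   # lvs_n_0 -> 20456 MB, the size given to lbd_nest_0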
00:28:29.472 ======================================================== 00:28:29.472 Latency(us) 00:28:29.472 Device Information : IOPS MiB/s Average min max 00:28:29.472 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5868.90 2.87 169.97 67.98 6000.83 00:28:29.472 ======================================================== 00:28:29.472 Total : 5868.90 2.87 169.97 67.98 6000.83 00:28:29.472 00:28:29.472 15:47:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:29.472 15:47:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:41.681 Initializing NVMe Controllers 00:28:41.681 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.681 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.681 Initialization complete. Launching workers. 00:28:41.681 ======================================================== 00:28:41.681 Latency(us) 00:28:41.681 Device Information : IOPS MiB/s Average min max 00:28:41.681 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2667.90 333.49 374.59 155.92 8090.55 00:28:41.681 ======================================================== 00:28:41.681 Total : 2667.90 333.49 374.59 155.92 8090.55 00:28:41.681 00:28:41.681 15:47:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:41.681 15:47:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:41.681 15:47:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:51.832 Initializing NVMe Controllers 00:28:51.832 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:51.832 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:51.832 Initialization complete. Launching workers. 00:28:51.832 ======================================================== 00:28:51.832 Latency(us) 00:28:51.832 Device Information : IOPS MiB/s Average min max 00:28:51.832 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11452.30 5.59 2792.33 993.67 9077.39 00:28:51.832 ======================================================== 00:28:51.832 Total : 11452.30 5.59 2792.33 993.67 9077.39 00:28:51.832 00:28:51.832 15:47:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:51.832 15:47:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:04.049 Initializing NVMe Controllers 00:29:04.049 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.049 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:04.049 Initialization complete. Launching workers. 
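Annotation: the six timed runs in this block come from the nested loops declared at host/perf.sh@95-99 above; expanded, the sweep is equivalent to this sketch (transport ID string as used throughout this log):

  PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
  for qd in 1 32 128; do      # qd_depth array
    for o in 512 131072; do   # io_size array
      $PERF -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TRID"
    done
  done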
00:29:04.049 ======================================================== 00:29:04.049 Latency(us) 00:29:04.049 Device Information : IOPS MiB/s Average min max 00:29:04.049 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3969.40 496.18 8063.60 5924.61 16036.23 00:29:04.049 ======================================================== 00:29:04.049 Total : 3969.40 496.18 8063.60 5924.61 16036.23 00:29:04.049 00:29:04.049 15:47:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:04.049 15:47:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:04.049 15:47:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:14.055 Initializing NVMe Controllers 00:29:14.055 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:14.055 Controller IO queue size 128, less than required. 00:29:14.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:14.055 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:14.055 Initialization complete. Launching workers. 00:29:14.055 ======================================================== 00:29:14.055 Latency(us) 00:29:14.055 Device Information : IOPS MiB/s Average min max 00:29:14.055 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18874.20 9.22 6781.24 1990.03 15821.36 00:29:14.055 ======================================================== 00:29:14.055 Total : 18874.20 9.22 6781.24 1990.03 15821.36 00:29:14.055 00:29:14.314 15:47:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:14.314 15:47:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:26.535 Initializing NVMe Controllers 00:29:26.535 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.535 Controller IO queue size 128, less than required. 00:29:26.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:26.535 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:26.535 Initialization complete. Launching workers. 
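Annotation: the MiB/s column in these tables is just IOPS times IO size; checking the q=32, 128 KiB run above reproduces the logged value:

  awk 'BEGIN { printf "%.3f MiB/s\n", 3969.40 * 131072 / 1048576 }'   # -> 496.175, matching the 496.18 column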
00:29:26.535 ======================================================== 00:29:26.535 Latency(us) 00:29:26.535 Device Information : IOPS MiB/s Average min max 00:29:26.535 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11050.50 1381.31 11582.40 3171.32 24818.83 00:29:26.535 ======================================================== 00:29:26.535 Total : 11050.50 1381.31 11582.40 3171.32 24818.83 00:29:26.535 00:29:26.535 15:48:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:26.535 15:48:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 09db43ca-a6c2-4bc1-b26e-88e784306f2b 00:29:26.535 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:26.535 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4c91dfb0-349b-47bd-affd-a37a484e39bf 00:29:26.795 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:27.055 rmmod nvme_rdma 00:29:27.055 rmmod nvme_fabrics 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2403142 ']' 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2403142 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 2403142 ']' 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 2403142 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:27.055 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2403142 00:29:27.316 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:27.316 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:27.316 15:48:04 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2403142' 00:29:27.316 killing process with pid 2403142 00:29:27.316 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 2403142 00:29:27.316 15:48:04 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 2403142 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:29.856 00:29:29.856 real 1m51.375s 00:29:29.856 user 7m1.871s 00:29:29.856 sys 0m7.041s 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:29.856 ************************************ 00:29:29.856 END TEST nvmf_perf 00:29:29.856 ************************************ 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.856 ************************************ 00:29:29.856 START TEST nvmf_fio_host 00:29:29.856 ************************************ 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:29.856 * Looking for test storage... 
00:29:29.856 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:29.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.856 --rc genhtml_branch_coverage=1 00:29:29.856 --rc genhtml_function_coverage=1 00:29:29.856 --rc genhtml_legend=1 00:29:29.856 --rc geninfo_all_blocks=1 00:29:29.856 --rc geninfo_unexecuted_blocks=1 00:29:29.856 00:29:29.856 ' 00:29:29.856 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:29.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.857 --rc genhtml_branch_coverage=1 00:29:29.857 --rc genhtml_function_coverage=1 00:29:29.857 --rc genhtml_legend=1 00:29:29.857 --rc geninfo_all_blocks=1 00:29:29.857 --rc geninfo_unexecuted_blocks=1 00:29:29.857 00:29:29.857 ' 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:29.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.857 --rc genhtml_branch_coverage=1 00:29:29.857 --rc genhtml_function_coverage=1 00:29:29.857 --rc genhtml_legend=1 00:29:29.857 --rc geninfo_all_blocks=1 00:29:29.857 --rc geninfo_unexecuted_blocks=1 00:29:29.857 00:29:29.857 ' 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:29.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.857 --rc genhtml_branch_coverage=1 00:29:29.857 --rc genhtml_function_coverage=1 00:29:29.857 --rc genhtml_legend=1 00:29:29.857 --rc geninfo_all_blocks=1 00:29:29.857 --rc geninfo_unexecuted_blocks=1 00:29:29.857 00:29:29.857 ' 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.857 15:48:07 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.857 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:29.857 
15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.857 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.858 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.858 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.858 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.858 15:48:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:36.433 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:36.433 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:36.433 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.433 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:36.434 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:36.434 
15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:36.434 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:36.694 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:36.694 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:36.694 altname enp217s0f0np0 00:29:36.694 altname ens818f0np0 00:29:36.694 inet 192.168.100.8/24 scope global mlx_0_0 00:29:36.694 valid_lft forever preferred_lft forever 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:36.694 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:36.694 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:36.694 altname enp217s0f1np1 00:29:36.694 altname ens818f1np1 00:29:36.694 inet 192.168.100.9/24 scope global mlx_0_1 00:29:36.694 valid_lft forever preferred_lft forever 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:36.694 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:36.695 15:48:14 
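[Note] Both ports resolve their addresses with the same pipeline: column 4 of `ip -o -4 addr show <if>` is the CIDR address, and cut strips the prefix length, giving 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1. A self-contained sketch of that helper (same pipeline as common.sh@116-117; the empty-check mirrors the guard at @79):

    #!/usr/bin/env bash
    # Print the first IPv4 address bound to an interface, without the /24 suffix.
    get_ip_address() {
        local interface=$1 ip
        ip=$(ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1)
        [[ -z "$ip" ]] && return 1    # interface exists but has no address yet
        echo "$ip"
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 here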
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:36.695 192.168.100.9' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:36.695 192.168.100.9' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:36.695 192.168.100.9' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2424193 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 2424193 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 2424193 ']' 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:36.695 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.695 [2024-11-03 15:48:14.412489] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:29:36.695 [2024-11-03 15:48:14.412542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.955 [2024-11-03 15:48:14.490964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.955 [2024-11-03 15:48:14.513623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.955 [2024-11-03 15:48:14.513669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.955 [2024-11-03 15:48:14.513679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.955 [2024-11-03 15:48:14.513689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.955 [2024-11-03 15:48:14.513696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.955 [2024-11-03 15:48:14.515248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.955 [2024-11-03 15:48:14.515347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.955 [2024-11-03 15:48:14.515430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.955 [2024-11-03 15:48:14.515432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.955 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:36.955 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:29:36.955 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:37.215 [2024-11-03 15:48:14.800463] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bc0c50/0x1bc5100) succeed. 00:29:37.215 [2024-11-03 15:48:14.809567] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bc2290/0x1c067a0) succeed. 
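[Note] The target side is now up: nvmf_tgt was launched with shm id 0, tracepoint mask 0xFFFF and core mask 0xF (hence the four reactors), the harness polled /var/tmp/spdk.sock until the RPC server answered, and a single RDMA transport was created, which is what produces the two create_ib_device notices for mlx5_0/mlx5_1. A sketch of the equivalent bring-up; $SPDK stands in for the full workspace path, and the poll loop stands in for the harness's waitforlisten:

    #!/usr/bin/env bash
    SPDK=/path/to/spdk    # shortened; the log uses the jenkins workspace checkout

    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the app answers on the default RPC socket (/var/tmp/spdk.sock).
    until $SPDK/scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.1; done

    # Same transport options as the trace: RDMA, 1024 shared buffers, 8 KiB IO units.
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192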
00:29:37.215 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:37.215 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:37.215 15:48:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.474 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:37.474 Malloc1 00:29:37.474 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.733 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:37.992 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:37.992 [2024-11-03 15:48:15.768955] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:29:38.252 15:48:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:38.252 15:48:16 
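[Note] Before the first fio job, the trace above provisioned the whole target path in a handful of RPCs: a 64 MiB malloc bdev, a subsystem, a namespace mapping, an RDMA data listener, and a listener on the discovery subsystem. Condensed into a sketch, with $rpc standing in for the full rpc.py path:

    #!/usr/bin/env bash
    rpc=$SPDK/scripts/rpc.py    # $SPDK shortened from the log's workspace path

    $rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                       # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

fio then reaches the namespace without any kernel block device, addressing it as 'trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' through the SPDK ioengine.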
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:38.252 15:48:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:38.252 15:48:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.252 15:48:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.252 15:48:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:29:38.252 15:48:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:38.531 15:48:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:38.531 15:48:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:38.531 15:48:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:38.531 15:48:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:38.789 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:38.789 fio-3.35 00:29:38.789 Starting 1 thread 00:29:41.316 00:29:41.316 test: (groupid=0, jobs=1): err= 0: pid=2424846: Sun Nov 3 15:48:18 2024 00:29:41.316 read: IOPS=18.1k, BW=70.8MiB/s (74.3MB/s)(142MiB/2004msec) 00:29:41.316 slat (nsec): min=1336, max=33464, avg=1461.90, stdev=443.81 00:29:41.316 clat (usec): min=1779, max=6359, avg=3503.85, stdev=83.65 00:29:41.316 lat (usec): min=1799, max=6361, avg=3505.31, stdev=83.57 00:29:41.316 clat percentiles (usec): 00:29:41.316 | 1.00th=[ 3458], 5.00th=[ 3490], 10.00th=[ 3490], 20.00th=[ 3490], 00:29:41.316 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:29:41.316 | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3523], 95.00th=[ 3523], 00:29:41.316 | 99.00th=[ 3589], 99.50th=[ 3818], 99.90th=[ 4555], 99.95th=[ 5473], 00:29:41.316 | 99.99th=[ 6325] 00:29:41.316 bw ( KiB/s): min=71144, max=73400, per=100.00%, avg=72548.00, stdev=980.89, samples=4 00:29:41.316 iops : min=17786, max=18350, avg=18137.00, stdev=245.22, samples=4 00:29:41.316 write: IOPS=18.2k, BW=70.9MiB/s (74.4MB/s)(142MiB/2004msec); 0 zone resets 00:29:41.316 slat (nsec): min=1382, max=17300, avg=1540.94, stdev=422.70 00:29:41.316 clat (usec): min=2549, max=6372, avg=3502.29, stdev=83.63 00:29:41.316 lat (usec): min=2559, max=6373, avg=3503.83, stdev=83.57 00:29:41.316 clat percentiles (usec): 00:29:41.316 | 1.00th=[ 3458], 5.00th=[ 3490], 10.00th=[ 3490], 20.00th=[ 3490], 00:29:41.316 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3490], 00:29:41.316 | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3523], 95.00th=[ 3523], 00:29:41.316 | 99.00th=[ 3589], 99.50th=[ 3818], 99.90th=[ 4228], 99.95th=[ 5866], 00:29:41.316 | 99.99th=[ 6325] 00:29:41.316 bw ( KiB/s): min=71224, max=73304, per=100.00%, avg=72656.00, stdev=977.14, samples=4 00:29:41.316 iops : min=17806, max=18326, avg=18164.00, stdev=244.28, samples=4 00:29:41.316 lat (msec) : 2=0.01%, 4=99.84%, 10=0.16% 00:29:41.316 cpu : usr=99.65%, sys=0.00%, ctx=16, majf=0, minf=2 00:29:41.316 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:41.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:41.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:41.316 issued rwts: total=36345,36384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:41.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:41.316 00:29:41.316 Run status group 0 (all jobs): 00:29:41.316 READ: bw=70.8MiB/s (74.3MB/s), 70.8MiB/s-70.8MiB/s (74.3MB/s-74.3MB/s), io=142MiB (149MB), run=2004-2004msec 00:29:41.316 WRITE: bw=70.9MiB/s (74.4MB/s), 70.9MiB/s-70.9MiB/s (74.4MB/s-74.4MB/s), io=142MiB (149MB), run=2004-2004msec 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- 
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:41.316 15:48:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:41.316 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:41.316 fio-3.35 00:29:41.316 Starting 1 thread 00:29:43.840 00:29:43.840 test: (groupid=0, jobs=1): err= 0: pid=2425360: Sun Nov 3 15:48:21 2024 00:29:43.840 read: IOPS=14.6k, BW=228MiB/s (239MB/s)(449MiB/1970msec) 00:29:43.840 slat (nsec): min=2236, max=49451, avg=2548.64, stdev=904.40 00:29:43.840 clat (usec): min=491, max=8076, avg=1672.43, stdev=1353.97 00:29:43.840 lat (usec): min=493, max=8096, avg=1674.98, stdev=1354.24 00:29:43.840 clat percentiles (usec): 00:29:43.840 | 1.00th=[ 676], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 906], 00:29:43.840 | 30.00th=[ 971], 40.00th=[ 1057], 50.00th=[ 1156], 60.00th=[ 1270], 00:29:43.840 | 70.00th=[ 1418], 80.00th=[ 1647], 90.00th=[ 4752], 95.00th=[ 4817], 00:29:43.840 | 99.00th=[ 6128], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7373], 00:29:43.840 | 99.99th=[ 8029] 00:29:43.840 bw ( KiB/s): min=111392, max=117984, per=49.14%, avg=114640.00, stdev=2758.01, samples=4 00:29:43.840 iops : min= 6962, max= 7374, avg=7165.00, stdev=172.38, samples=4 00:29:43.840 write: IOPS=8420, BW=132MiB/s (138MB/s)(233MiB/1772msec); 0 zone resets 00:29:43.840 slat (usec): min=26, max=121, avg=28.60, stdev= 5.00 00:29:43.840 clat (usec): min=4006, max=17439, avg=12340.89, stdev=1694.98 00:29:43.840 lat (usec): min=4035, max=17468, avg=12369.49, stdev=1694.50 00:29:43.840 clat percentiles (usec): 00:29:43.840 | 1.00th=[ 7832], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945], 00:29:43.840 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:29:43.840 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14484], 95.00th=[15139], 00:29:43.840 | 99.00th=[16319], 99.50th=[16581], 99.90th=[16909], 99.95th=[17171], 00:29:43.840 | 99.99th=[17433] 00:29:43.840 bw ( KiB/s): min=112768, max=123264, per=87.91%, avg=118448.00, stdev=4371.21, samples=4 00:29:43.840 iops : min= 7048, max= 7704, avg=7403.00, stdev=273.20, samples=4 00:29:43.840 lat (usec) : 500=0.01%, 750=2.53%, 1000=19.86% 00:29:43.840 lat (msec) : 2=33.27%, 4=1.95%, 10=10.89%, 20=31.49% 00:29:43.840 cpu : usr=96.36%, sys=1.90%, ctx=183, majf=0, minf=2 00:29:43.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:43.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:43.840 issued rwts: total=28727,14922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:43.840 00:29:43.840 Run status group 0 (all jobs): 00:29:43.840 READ: bw=228MiB/s (239MB/s), 228MiB/s-228MiB/s (239MB/s-239MB/s), io=449MiB (471MB), run=1970-1970msec 00:29:43.841 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=233MiB (244MB), run=1772-1772msec 00:29:43.841 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.097 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 
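[Note] Both fio launches above (example_config.fio and mock_sgl_config.fio) went through the same fio_plugin wrapper, whose real job is assembling LD_PRELOAD: it runs ldd on the SPDK ioengine, greps for each sanitizer runtime, and preloads any hit ahead of the plugin, since ASan must be the first library loaded into the process. On this build both greps come back empty, so only the plugin itself is preloaded. A sketch of that check; job.fio stands in for the traced config files:

    #!/usr/bin/env bash
    plugin=$SPDK/build/fio/spdk_nvme    # $SPDK shortened from the log's path
    preload=""
    for sanitizer in libasan libclang_rt.asan; do
        # Column 3 of ldd output is the resolved library path; empty when not linked in.
        lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$lib" ]] && preload+=" $lib"
    done

    LD_PRELOAD="$preload $plugin" /usr/src/fio/fio job.fio \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'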
00:29:44.097 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:44.097 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:44.097 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:29:44.097 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:29:44.097 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:44.097 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:44.098 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:29:44.098 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:29:44.098 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:29:44.098 15:48:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:29:47.369 Nvme0n1 00:29:47.369 15:48:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:52.621 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=7a3ff4c6-532d-43c0-ad59-0a3f57e9e761 00:29:52.621 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 7a3ff4c6-532d-43c0-ad59-0a3f57e9e761 00:29:52.621 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=7a3ff4c6-532d-43c0-ad59-0a3f57e9e761 00:29:52.621 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:29:52.621 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:29:52.621 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:29:52.621 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:52.878 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:29:52.878 { 00:29:52.878 "uuid": "7a3ff4c6-532d-43c0-ad59-0a3f57e9e761", 00:29:52.878 "name": "lvs_0", 00:29:52.878 "base_bdev": "Nvme0n1", 00:29:52.878 "total_data_clusters": 1862, 00:29:52.878 "free_clusters": 1862, 00:29:52.878 "block_size": 512, 00:29:52.878 "cluster_size": 1073741824 00:29:52.878 } 00:29:52.878 ]' 00:29:52.878 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="7a3ff4c6-532d-43c0-ad59-0a3f57e9e761") .free_clusters' 00:29:52.878 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=1862 00:29:52.878 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="7a3ff4c6-532d-43c0-ad59-0a3f57e9e761") .cluster_size' 00:29:52.878 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:29:52.878 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=1906688 00:29:52.878 15:48:30 
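[Note] The free_mb figure is pure arithmetic over two jq lookups: 1862 free clusters times a 1073741824-byte (1 GiB) cluster size, divided down to MiB, gives 1862 * 1024 = 1906688, exactly the size handed to bdev_lvol_create on the next line. A sketch of that helper (jq filters as traced; rpc.py path shortened):

    #!/usr/bin/env bash
    # Free space of an lvstore in MiB, given its UUID (mirrors get_lvs_free_mb).
    get_lvs_free_mb() {
        local uuid=$1 fc cs
        fc=$(rpc.py bdev_lvol_get_lvstores \
             | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
        cs=$(rpc.py bdev_lvol_get_lvstores \
             | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
        echo $(( fc * cs / 1024 / 1024 ))    # 1862 * 2^30 B / 2^20 = 1906688 MiB
    }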
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 1906688 00:29:52.878 1906688 00:29:52.878 15:48:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:29:53.441 9596623c-bc71-42eb-86a7-2dde00417a6f 00:29:53.441 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:53.698 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:53.954 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:29:53.955 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:54.232 15:48:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:54.496 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:54.496 fio-3.35 00:29:54.496 Starting 1 thread 00:29:57.050 00:29:57.050 test: (groupid=0, jobs=1): err= 0: pid=2427680: Sun Nov 3 15:48:34 2024 00:29:57.050 read: IOPS=10.1k, BW=39.3MiB/s (41.3MB/s)(78.9MiB/2005msec) 00:29:57.050 slat (nsec): min=1331, max=20078, avg=1429.20, stdev=251.33 00:29:57.050 clat (usec): min=182, max=332490, avg=6309.85, stdev=18494.87 00:29:57.050 lat (usec): min=184, max=332493, avg=6311.28, stdev=18494.90 00:29:57.050 clat percentiles (msec): 00:29:57.050 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:57.051 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:29:57.051 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:29:57.051 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:29:57.051 | 99.99th=[ 334] 00:29:57.051 bw ( KiB/s): min=15184, max=48808, per=99.90%, avg=40250.00, stdev=16711.50, samples=4 00:29:57.051 iops : min= 3796, max=12202, avg=10062.50, stdev=4177.87, samples=4 00:29:57.051 write: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(78.9MiB/2005msec); 0 zone resets 00:29:57.051 slat (nsec): min=1374, max=17780, avg=1518.24, stdev=301.66 00:29:57.051 clat (usec): min=143, max=332797, avg=6276.26, stdev=17984.81 00:29:57.051 lat (usec): min=145, max=332800, avg=6277.77, stdev=17984.86 00:29:57.051 clat percentiles (msec): 00:29:57.051 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:57.051 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:29:57.051 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:29:57.051 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:29:57.051 | 99.99th=[ 334] 00:29:57.051 bw ( KiB/s): min=15888, max=48520, per=99.97%, avg=40290.00, stdev=16268.14, samples=4 00:29:57.051 iops : min= 3972, max=12130, avg=10072.50, stdev=4067.04, samples=4 00:29:57.051 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:29:57.051 lat (msec) : 2=0.03%, 4=0.30%, 10=99.31%, 500=0.32% 00:29:57.051 cpu : usr=99.35%, sys=0.15%, ctx=16, majf=0, minf=2 00:29:57.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:57.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:57.051 issued rwts: total=20195,20202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:57.051 00:29:57.051 Run status group 0 (all jobs): 00:29:57.051 READ: bw=39.3MiB/s (41.3MB/s), 39.3MiB/s-39.3MiB/s (41.3MB/s-41.3MB/s), io=78.9MiB (82.7MB), run=2005-2005msec 00:29:57.051 WRITE: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=78.9MiB 
(82.7MB), run=2005-2005msec 00:29:57.051 15:48:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:57.051 15:48:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:58.420 15:48:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d7302345-b512-4532-bbae-efe0982ba901 00:29:58.420 15:48:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d7302345-b512-4532-bbae-efe0982ba901 00:29:58.420 15:48:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=d7302345-b512-4532-bbae-efe0982ba901 00:29:58.421 15:48:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:29:58.421 15:48:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:29:58.421 15:48:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:29:58.421 15:48:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:58.421 15:48:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:29:58.421 { 00:29:58.421 "uuid": "7a3ff4c6-532d-43c0-ad59-0a3f57e9e761", 00:29:58.421 "name": "lvs_0", 00:29:58.421 "base_bdev": "Nvme0n1", 00:29:58.421 "total_data_clusters": 1862, 00:29:58.421 "free_clusters": 0, 00:29:58.421 "block_size": 512, 00:29:58.421 "cluster_size": 1073741824 00:29:58.421 }, 00:29:58.421 { 00:29:58.421 "uuid": "d7302345-b512-4532-bbae-efe0982ba901", 00:29:58.421 "name": "lvs_n_0", 00:29:58.421 "base_bdev": "9596623c-bc71-42eb-86a7-2dde00417a6f", 00:29:58.421 "total_data_clusters": 476206, 00:29:58.421 "free_clusters": 476206, 00:29:58.421 "block_size": 512, 00:29:58.421 "cluster_size": 4194304 00:29:58.421 } 00:29:58.421 ]' 00:29:58.421 15:48:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="d7302345-b512-4532-bbae-efe0982ba901") .free_clusters' 00:29:58.421 15:48:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=476206 00:29:58.421 15:48:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="d7302345-b512-4532-bbae-efe0982ba901") .cluster_size' 00:29:58.421 15:48:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:29:58.421 15:48:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=1904824 00:29:58.421 15:48:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 1904824 00:29:58.421 1904824 00:29:58.421 15:48:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:29:59.351 7cbafb64-452f-425e-845c-00788a0c711b 00:29:59.351 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:59.608 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 
lvs_n_0/lbd_nest_0 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:29:59.866 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:00.140 15:48:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:00.400 test: (g=0): rw=randrw, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:00.400 fio-3.35 00:30:00.400 Starting 1 thread 00:30:02.920 00:30:02.920 test: (groupid=0, jobs=1): err= 0: pid=2428738: Sun Nov 3 15:48:40 2024 00:30:02.920 read: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(79.2MiB/2006msec) 00:30:02.920 slat (nsec): min=1339, max=18397, avg=1454.01, stdev=208.97 00:30:02.920 clat (usec): min=3245, max=10999, avg=6248.90, stdev=201.61 00:30:02.920 lat (usec): min=3248, max=11000, avg=6250.35, stdev=201.58 00:30:02.920 clat percentiles (usec): 00:30:02.920 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6194], 20.00th=[ 6194], 00:30:02.920 | 30.00th=[ 6259], 40.00th=[ 6259], 50.00th=[ 6259], 60.00th=[ 6259], 00:30:02.920 | 70.00th=[ 6259], 80.00th=[ 6259], 90.00th=[ 6325], 95.00th=[ 6325], 00:30:02.920 | 99.00th=[ 6980], 99.50th=[ 7046], 99.90th=[ 8455], 99.95th=[ 9503], 00:30:02.920 | 99.99th=[10945] 00:30:02.920 bw ( KiB/s): min=39048, max=41136, per=100.00%, avg=40462.00, stdev=965.26, samples=4 00:30:02.920 iops : min= 9762, max=10284, avg=10115.50, stdev=241.32, samples=4 00:30:02.920 write: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(79.3MiB/2006msec); 0 zone resets 00:30:02.920 slat (nsec): min=1370, max=17283, avg=1535.15, stdev=208.23 00:30:02.920 clat (usec): min=3251, max=11037, avg=6272.74, stdev=228.65 00:30:02.920 lat (usec): min=3256, max=11038, avg=6274.28, stdev=228.63 00:30:02.920 clat percentiles (usec): 00:30:02.920 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6194], 20.00th=[ 6259], 00:30:02.920 | 30.00th=[ 6259], 40.00th=[ 6259], 50.00th=[ 6259], 60.00th=[ 6259], 00:30:02.920 | 70.00th=[ 6259], 80.00th=[ 6325], 90.00th=[ 6325], 95.00th=[ 6325], 00:30:02.920 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[10028], 99.95th=[10159], 00:30:02.920 | 99.99th=[11076] 00:30:02.920 bw ( KiB/s): min=39440, max=40920, per=99.93%, avg=40472.00, stdev=700.26, samples=4 00:30:02.920 iops : min= 9860, max=10230, avg=10118.00, stdev=175.07, samples=4 00:30:02.920 lat (msec) : 4=0.05%, 10=99.87%, 20=0.08% 00:30:02.920 cpu : usr=99.65%, sys=0.00%, ctx=15, majf=0, minf=2 00:30:02.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:02.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:02.920 issued rwts: total=20283,20311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:02.920 00:30:02.920 Run status group 0 (all jobs): 00:30:02.920 READ: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=79.2MiB (83.1MB), run=2006-2006msec 00:30:02.920 WRITE: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=79.3MiB (83.2MB), run=2006-2006msec 00:30:02.920 15:48:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:02.920 15:48:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:02.920 15:48:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:11.014 15:48:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:11.014 15:48:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:16.278 15:48:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:16.278 15:48:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:19.552 rmmod nvme_rdma 00:30:19.552 rmmod nvme_fabrics 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2424193 ']' 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2424193 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 2424193 ']' 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 2424193 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2424193 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2424193' 00:30:19.552 killing process with pid 2424193 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 2424193 00:30:19.552 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 2424193 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:19.810 00:30:19.810 real 0m50.056s 00:30:19.810 user 3m37.477s 00:30:19.810 sys 0m7.708s 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.810 ************************************ 00:30:19.810 END TEST nvmf_fio_host 00:30:19.810 ************************************ 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.810 ************************************ 00:30:19.810 START TEST nvmf_failover 00:30:19.810 ************************************ 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:30:19.810 * Looking for test storage... 00:30:19.810 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:19.810 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.068 --rc genhtml_branch_coverage=1 00:30:20.068 --rc genhtml_function_coverage=1 00:30:20.068 --rc genhtml_legend=1 00:30:20.068 --rc geninfo_all_blocks=1 00:30:20.068 --rc geninfo_unexecuted_blocks=1 00:30:20.068 00:30:20.068 ' 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.068 --rc genhtml_branch_coverage=1 00:30:20.068 --rc genhtml_function_coverage=1 00:30:20.068 --rc genhtml_legend=1 00:30:20.068 --rc geninfo_all_blocks=1 00:30:20.068 --rc geninfo_unexecuted_blocks=1 00:30:20.068 00:30:20.068 ' 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.068 --rc genhtml_branch_coverage=1 00:30:20.068 --rc genhtml_function_coverage=1 00:30:20.068 --rc genhtml_legend=1 00:30:20.068 --rc geninfo_all_blocks=1 00:30:20.068 --rc geninfo_unexecuted_blocks=1 00:30:20.068 00:30:20.068 ' 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.068 --rc genhtml_branch_coverage=1 00:30:20.068 --rc genhtml_function_coverage=1 00:30:20.068 --rc genhtml_legend=1 00:30:20.068 --rc geninfo_all_blocks=1 00:30:20.068 --rc geninfo_unexecuted_blocks=1 00:30:20.068 00:30:20.068 ' 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.068 15:48:57 
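[Note] The lcov gate above rests on a pure-bash version compare: cmp_versions splits both version strings on '.', '-' and ':' and walks them component by component with numeric tests, so `lt 1.15 2` is decided on the first component (1 < 2) and the coverage options get the pre-2.0 `lcov_branch_coverage` form. A compact sketch of the same idea; version_lt is our name, and it skips the per-component decimal validation the real cmp_versions performs:

    #!/usr/bin/env bash
    # Return 0 when version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # versions are equal
    }

    version_lt 1.15 2 && echo older    # prints: older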
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.068 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:20.069 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.069 15:48:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.624 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:26.625 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:26.625 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:26.625 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:26.625 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:26.625 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:26.884 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:26.884 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:26.884 altname enp217s0f0np0 00:30:26.884 altname ens818f0np0 00:30:26.884 inet 192.168.100.8/24 scope global mlx_0_0 00:30:26.884 
valid_lft forever preferred_lft forever 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:26.884 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:26.884 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:26.884 altname enp217s0f1np1 00:30:26.884 altname ens818f1np1 00:30:26.884 inet 192.168.100.9/24 scope global mlx_0_1 00:30:26.884 valid_lft forever preferred_lft forever 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.884 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:26.885 15:49:04 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:26.885 192.168.100.9' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:26.885 192.168.100.9' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:26.885 192.168.100.9' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2435279 00:30:26.885 
15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2435279 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2435279 ']' 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:26.885 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.885 [2024-11-03 15:49:04.669175] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:30:26.885 [2024-11-03 15:49:04.669229] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.143 [2024-11-03 15:49:04.745534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:27.143 [2024-11-03 15:49:04.767111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.143 [2024-11-03 15:49:04.767152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.143 [2024-11-03 15:49:04.767162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.143 [2024-11-03 15:49:04.767170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.143 [2024-11-03 15:49:04.767178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:27.143 [2024-11-03 15:49:04.768770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.143 [2024-11-03 15:49:04.768858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.143 [2024-11-03 15:49:04.768860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.143 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:27.143 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:30:27.143 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.143 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.143 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:27.143 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.143 15:49:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:27.401 [2024-11-03 15:49:05.088959] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xce03d0/0xce4880) succeed. 00:30:27.401 [2024-11-03 15:49:05.097821] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xce1970/0xd25f20) succeed. 00:30:27.658 15:49:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:27.658 Malloc0 00:30:27.658 15:49:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:27.915 15:49:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:28.172 15:49:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:28.429 [2024-11-03 15:49:05.995172] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:28.429 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:28.429 [2024-11-03 15:49:06.199586] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:28.686 [2024-11-03 15:49:06.400285] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2435632 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify 
-t 15 -f 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2435632 /var/tmp/bdevperf.sock 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2435632 ']' 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:28.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:28.686 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.943 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:28.943 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:30:28.943 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:29.200 NVMe0n1 00:30:29.201 15:49:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:29.458 00:30:29.458 15:49:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:29.458 15:49:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2435649 00:30:29.458 15:49:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:30.828 15:49:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:30.828 15:49:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:34.105 15:49:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:34.105 00:30:34.105 15:49:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:34.105 15:49:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:37.381 15:49:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:37.381 [2024-11-03 15:49:15.054827] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:37.381 15:49:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:38.312 15:49:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:38.569 15:49:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2435649 00:30:45.140 { 00:30:45.140 "results": [ 00:30:45.140 { 00:30:45.140 "job": "NVMe0n1", 00:30:45.140 "core_mask": "0x1", 00:30:45.140 "workload": "verify", 00:30:45.140 "status": "finished", 00:30:45.140 "verify_range": { 00:30:45.140 "start": 0, 00:30:45.140 "length": 16384 00:30:45.140 }, 00:30:45.140 "queue_depth": 128, 00:30:45.140 "io_size": 4096, 00:30:45.140 "runtime": 15.005713, 00:30:45.140 "iops": 14471.155086066221, 00:30:45.140 "mibps": 56.527949554946176, 00:30:45.140 "io_failed": 4828, 00:30:45.140 "io_timeout": 0, 00:30:45.140 "avg_latency_us": 8630.861827099983, 00:30:45.140 "min_latency_us": 340.7872, 00:30:45.140 "max_latency_us": 1020054.7328 00:30:45.140 } 00:30:45.140 ], 00:30:45.140 "core_count": 1 00:30:45.140 } 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2435632 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2435632 ']' 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2435632 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2435632 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2435632' 00:30:45.140 killing process with pid 2435632 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2435632 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2435632 00:30:45.140 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:45.140 [2024-11-03 15:49:06.475311] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:30:45.140 [2024-11-03 15:49:06.475372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2435632 ] 00:30:45.140 [2024-11-03 15:49:06.554348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.140 [2024-11-03 15:49:06.576899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.140 Running I/O for 15 seconds... 00:30:45.140 18183.00 IOPS, 71.03 MiB/s [2024-11-03T14:49:22.930Z] 10004.50 IOPS, 39.08 MiB/s [2024-11-03T14:49:22.930Z] [2024-11-03 15:49:09.403921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.403960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.403985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.403996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29056 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x181400 00:30:45.140 [2024-11-03 15:49:09.404260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.140 [2024-11-03 15:49:09.404271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 
len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x181400 
00:30:45.141 [2024-11-03 15:49:09.404482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 15:49:09.404641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 00:30:45.141 [2024-11-03 15:49:09.404652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x181400 00:30:45.141 [2024-11-03 
15:49:09.404661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (2024-11-03 15:49:09.404672 - 15:49:09.406503): READ commands sqid:1 lba:29280-29688 len:8 SGL KEYED DATA BLOCK key:0x181400 and WRITE commands sqid:1 lba:29696-30008 len:8 SGL DATA BLOCK OFFSET 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:52169 cdw0:65ca1000 sqhd:8ef4 p:1 m:0 dnr:0 ...]
00:30:45.144 [2024-11-03 15:49:09.408316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:45.144 [2024-11-03 15:49:09.408329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:45.144 [2024-11-03 15:49:09.408337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30016 len:8 PRP1 0x0 PRP2 0x0
00:30:45.144 [2024-11-03 15:49:09.408346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.144 [2024-11-03 15:49:09.408389] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:30:45.144 [2024-11-03 15:49:09.408400] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:45.144 [2024-11-03 15:49:09.411174] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:45.144 [2024-11-03 15:49:09.425917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:45.144 [2024-11-03 15:49:09.474613] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:30:45.144 11700.00 IOPS, 45.70 MiB/s [2024-11-03T14:49:22.934Z] 13346.25 IOPS, 52.13 MiB/s [2024-11-03T14:49:22.934Z] 12674.00 IOPS, 49.51 MiB/s [2024-11-03T14:49:22.934Z]
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (2024-11-03 15:49:12.868884 - 15:49:12.870700): READ commands sqid:1 lba:127752-128232 len:8 SGL KEYED DATA BLOCK key:0x182d00 interleaved with WRITE commands sqid:1 lba:128288-128504 len:8 SGL DATA BLOCK OFFSET 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 ...]
00:30:45.146 [2024-11-03 15:49:12.870710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x182d00
00:30:45.146 [2024-11-03 15:49:12.870720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171
cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.146 [2024-11-03 15:49:12.870730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x182d00 00:30:45.146 [2024-11-03 15:49:12.870739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.146 [2024-11-03 15:49:12.870750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.146 [2024-11-03 15:49:12.870759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.146 [2024-11-03 15:49:12.870769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.146 [2024-11-03 15:49:12.870778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.146 [2024-11-03 15:49:12.870789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.146 [2024-11-03 15:49:12.870798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.146 [2024-11-03 15:49:12.870808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.146 [2024-11-03 15:49:12.870817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.870827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.870836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.870847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.870857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.870867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.870876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.870888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.870897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.870908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.870917] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.870927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.870936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.870946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.870955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.870969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.870979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.870989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.870998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128656 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:12.871370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x182d00 00:30:45.147 [2024-11-03 15:49:12.871390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x182d00 00:30:45.147 [2024-11-03 15:49:12.871409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x182d00 00:30:45.147 [2024-11-03 15:49:12.871429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.871440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x182d00 00:30:45.147 [2024-11-03 15:49:12.871450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52171 cdw0:65ca1000 sqhd:9e70 p:1 m:0 dnr:0 00:30:45.147 [2024-11-03 15:49:12.873374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.147 [2024-11-03 15:49:12.873387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.147 [2024-11-03 15:49:12.873395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128768 len:8 PRP1 0x0 PRP2 0x0 00:30:45.147 [2024-11-03 15:49:12.873404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
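The flood of identical ABORTED - SQ DELETION (00/08) completions above is the expected side effect of a path failover: when bdev_nvme tears down the RDMA qpair on the failing path, every command still queued on that submission queue is completed manually with that status, and the I/O is reissued on the surviving path (the failover notice and the recovering IOPS samples follow below). As a rough illustration only, a two-listener RDMA failover of this shape could be driven with SPDK's scripts/rpc.py as sketched here; the address, ports, and NQN mirror the log output, but the exact flags (in particular the -x failover multipath mode of bdev_nvme_attach_controller) are assumptions about the rpc.py version, not commands taken from this job's test scripts.

# Target side: one subsystem reachable through two RDMA listeners (hypothetical setup).
rpc.py nvmf_create_transport -t RDMA
rpc.py bdev_malloc_create -b Malloc0 64 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -f ipv4 -a 192.168.100.8 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -f ipv4 -a 192.168.100.8 -s 4422
# Host side: register both paths under one controller name so bdev_nvme can fail
# over between them (-x failover is an assumed spelling of the multipath mode flag).
rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover
# Drop the active listener to force a failover like the one seen in this log,
# then confirm the controller came back after the reset.
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -f ipv4 -a 192.168.100.8 -s 4421
rpc.py bdev_nvme_get_controllers

Under such a setup, removing the 4421 listener should reproduce the sequence recorded here: queued I/O aborted with SQ DELETION, bdev_nvme_failover_trid switching to the 4422 path, and a successful controller reset once the new qpair connects.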
00:30:45.147 [2024-11-03 15:49:12.873446] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:30:45.147 [2024-11-03 15:49:12.873457] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:30:45.147 [2024-11-03 15:49:12.876207] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:45.147 [2024-11-03 15:49:12.890434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:30:45.147 [2024-11-03 15:49:12.934099] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:30:45.147 11710.33 IOPS, 45.74 MiB/s [2024-11-03T14:49:22.937Z] 12682.86 IOPS, 49.54 MiB/s [2024-11-03T14:49:22.937Z] 13412.25 IOPS, 52.39 MiB/s [2024-11-03T14:49:22.937Z] 13866.11 IOPS, 54.16 MiB/s [2024-11-03T14:49:22.937Z]
[2024-11-03 15:49:17.271123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.147 [2024-11-03 15:49:17.271162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52174 cdw0:65ca1000 sqhd:e3c4 p:1 m:0 dnr:0
[dozens of further queued commands printed and aborted the same way after this second SQ deletion: interleaved READs lba:107144-107768 (SGL KEYED DATA BLOCK, key:0x181400) and WRITEs lba:107792-108160 (SGL DATA BLOCK OFFSET 0x0), each completed with the identical status ABORTED - SQ DELETION (00/08) qid:1 cid:52174 cdw0:65ca1000 sqhd:e3c4 p:1 m:0 dnr:0]
00:30:45.151 [2024-11-03 15:49:17.275392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:45.151 [2024-11-03 15:49:17.275404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:45.151 [2024-11-03 15:49:17.275413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107776 len:8 PRP1 0x0 PRP2 0x0 00:30:45.151 [2024-11-03 15:49:17.275422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.151 [2024-11-03 15:49:17.275464] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:30:45.151 [2024-11-03 15:49:17.275474] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:45.151 [2024-11-03 15:49:17.278246] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:45.151 [2024-11-03 15:49:17.292082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:30:45.151 12479.50 IOPS, 48.75 MiB/s [2024-11-03T14:49:22.941Z] [2024-11-03 15:49:17.338040] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:30:45.151 12999.91 IOPS, 50.78 MiB/s [2024-11-03T14:49:22.941Z] 13460.25 IOPS, 52.58 MiB/s [2024-11-03T14:49:22.941Z] 13847.92 IOPS, 54.09 MiB/s [2024-11-03T14:49:22.941Z] 14180.36 IOPS, 55.39 MiB/s [2024-11-03T14:49:22.941Z] 14471.13 IOPS, 56.53 MiB/s 00:30:45.151 Latency(us) 00:30:45.151 [2024-11-03T14:49:22.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.151 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:45.151 Verification LBA range: start 0x0 length 0x4000 00:30:45.151 NVMe0n1 : 15.01 14471.16 56.53 321.74 0.00 8630.86 340.79 1020054.73 00:30:45.151 [2024-11-03T14:49:22.941Z] =================================================================================================================== 00:30:45.151 [2024-11-03T14:49:22.941Z] Total : 14471.16 56.53 321.74 0.00 8630.86 340.79 1020054.73 00:30:45.151 Received shutdown signal, test time was about 15.000000 seconds 00:30:45.151 00:30:45.151 Latency(us) 00:30:45.151 [2024-11-03T14:49:22.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.151 [2024-11-03T14:49:22.941Z] =================================================================================================================== 00:30:45.151 [2024-11-03T14:49:22.941Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2438290 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2438290 /var/tmp/bdevperf.sock 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2438290 ']' 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:30:45.151 15:49:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:45.453 [2024-11-03 15:49:23.017728] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:45.453 15:49:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:45.453 [2024-11-03 15:49:23.214361] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:30:45.728 15:49:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:45.728 NVMe0n1 00:30:45.986 15:49:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:45.986 00:30:46.244 15:49:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:46.244 00:30:46.502 15:49:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:46.502 15:49:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:46.502 15:49:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:46.760 15:49:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:50.041 15:49:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:50.041 15:49:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:50.041 15:49:27 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@90 -- # run_test_pid=2439107 00:30:50.041 15:49:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:50.041 15:49:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2439107 00:30:50.974 { 00:30:50.974 "results": [ 00:30:50.974 { 00:30:50.974 "job": "NVMe0n1", 00:30:50.974 "core_mask": "0x1", 00:30:50.974 "workload": "verify", 00:30:50.974 "status": "finished", 00:30:50.974 "verify_range": { 00:30:50.974 "start": 0, 00:30:50.974 "length": 16384 00:30:50.975 }, 00:30:50.975 "queue_depth": 128, 00:30:50.975 "io_size": 4096, 00:30:50.975 "runtime": 1.007621, 00:30:50.975 "iops": 18214.189660596592, 00:30:50.975 "mibps": 71.14917836170544, 00:30:50.975 "io_failed": 0, 00:30:50.975 "io_timeout": 0, 00:30:50.975 "avg_latency_us": 6986.342745927097, 00:30:50.975 "min_latency_us": 1153.4336, 00:30:50.975 "max_latency_us": 15099.4944 00:30:50.975 } 00:30:50.975 ], 00:30:50.975 "core_count": 1 00:30:50.975 } 00:30:51.233 15:49:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:51.233 [2024-11-03 15:49:22.657780] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:30:51.233 [2024-11-03 15:49:22.657841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438290 ] 00:30:51.233 [2024-11-03 15:49:22.737057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.233 [2024-11-03 15:49:22.756284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.233 [2024-11-03 15:49:24.419590] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:30:51.233 [2024-11-03 15:49:24.420255] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:30:51.233 [2024-11-03 15:49:24.420288] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:30:51.233 [2024-11-03 15:49:24.444855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:30:51.233 [2024-11-03 15:49:24.462410] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:30:51.233 Running I/O for 1 seconds... 
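
The trace above condenses the multipath setup that failover.sh drives over the RPC socket: expose the subsystem on two extra portals, attach the same controller through all three portals in failover mode, then drop the active path so bdev_nvme fails over before bdevperf runs its verify workload. A minimal stand-alone sketch of that sequence, assuming the portal address and socket path from this run (the loop and variable names are illustrative, not the test script itself):

    #!/usr/bin/env bash
    # Hedged sketch of the failover.sh RPC flow seen in the trace above.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    ADDR=192.168.100.8

    # Expose the subsystem on the two extra portals used for failover.
    $RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a "$ADDR" -s 4421
    $RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a "$ADDR" -s 4422

    # Attach the same controller through all three portals in failover mode;
    # repeated attaches to the same bdev name add alternate paths.
    for port in 4420 4421 4422; do
      $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t rdma \
          -a "$ADDR" -s "$port" -f ipv4 -n "$NQN" -x failover
    done

    # Drop the active path; bdev_nvme should fail over to the next portal.
    $RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t rdma \
        -a "$ADDR" -s 4420 -f ipv4 -n "$NQN"

The test then counts 'Resetting controller successful' lines in the log and fails unless the count matches the number of forced path drops.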
00:30:51.233 18176.00 IOPS, 71.00 MiB/s 00:30:51.233 Latency(us) 00:30:51.233 [2024-11-03T14:49:29.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.233 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:51.233 Verification LBA range: start 0x0 length 0x4000 00:30:51.233 NVMe0n1 : 1.01 18214.19 71.15 0.00 0.00 6986.34 1153.43 15099.49 00:30:51.233 [2024-11-03T14:49:29.023Z] =================================================================================================================== 00:30:51.233 [2024-11-03T14:49:29.023Z] Total : 18214.19 71.15 0.00 0.00 6986.34 1153.43 15099.49 00:30:51.233 15:49:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:51.233 15:49:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:51.233 15:49:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.491 15:49:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:51.491 15:49:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:51.748 15:49:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.006 15:49:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2438290 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2438290 ']' 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2438290 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2438290 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2438290' 00:30:55.286 killing process with pid 2438290 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2438290 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2438290 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:30:55.286 15:49:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:55.545 rmmod nvme_rdma 00:30:55.545 rmmod nvme_fabrics 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2435279 ']' 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2435279 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2435279 ']' 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2435279 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2435279 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2435279' 00:30:55.545 killing process with pid 2435279 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2435279 00:30:55.545 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2435279 00:30:55.803 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:55.803 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:55.803 00:30:55.803 real 0m36.076s 00:30:55.803 user 1m58.652s 00:30:55.803 sys 0m7.550s 00:30:55.803 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:55.803 15:49:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
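
The teardown traced above follows a fixed order: delete the subsystem over RPC, unload the host-side nvme-rdma/nvme-fabrics modules, then kill the target process only after checking the PID is still alive and inspecting its comm name. A condensed sketch of that flow, assuming the killprocess shape shown by the xtrace in autotest_common.sh (the sudo branch is inferred from the comm= comparison against 'sudo'; error handling trimmed):

    # Hedged sketch of the nvmftestfini/killprocess teardown traced above.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma      # prints 'rmmod nvme_rdma', 'rmmod nvme_fabrics'
    modprobe -v -r nvme-fabrics

    killprocess() {
      local pid=$1 process_name=
      kill -0 "$pid" 2>/dev/null || return 0          # already gone
      if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
      fi
      echo "killing process with pid $pid"
      if [[ $process_name == sudo ]]; then
        sudo kill "$pid"      # signal through sudo so the real child gets it
      else
        kill "$pid"
      fi
      wait "$pid" 2>/dev/null || true   # reap only if it was our child
    }
    killprocess 2435279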
00:30:55.803 ************************************ 00:30:55.803 END TEST nvmf_failover 00:30:55.803 ************************************ 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.062 ************************************ 00:30:56.062 START TEST nvmf_host_discovery 00:30:56.062 ************************************ 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:56.062 * Looking for test storage... 00:30:56.062 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:56.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.062 --rc genhtml_branch_coverage=1 00:30:56.062 --rc genhtml_function_coverage=1 00:30:56.062 --rc genhtml_legend=1 00:30:56.062 --rc geninfo_all_blocks=1 00:30:56.062 --rc geninfo_unexecuted_blocks=1 00:30:56.062 00:30:56.062 ' 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:56.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.062 --rc genhtml_branch_coverage=1 00:30:56.062 --rc genhtml_function_coverage=1 00:30:56.062 --rc genhtml_legend=1 00:30:56.062 --rc geninfo_all_blocks=1 00:30:56.062 --rc geninfo_unexecuted_blocks=1 00:30:56.062 00:30:56.062 ' 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:56.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.062 --rc genhtml_branch_coverage=1 00:30:56.062 --rc genhtml_function_coverage=1 00:30:56.062 --rc genhtml_legend=1 00:30:56.062 --rc geninfo_all_blocks=1 00:30:56.062 --rc geninfo_unexecuted_blocks=1 00:30:56.062 00:30:56.062 ' 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:56.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.062 --rc genhtml_branch_coverage=1 00:30:56.062 --rc genhtml_function_coverage=1 00:30:56.062 --rc genhtml_legend=1 00:30:56.062 --rc geninfo_all_blocks=1 00:30:56.062 --rc geninfo_unexecuted_blocks=1 00:30:56.062 00:30:56.062 ' 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:56.062 15:49:33 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:56.062 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:56.063 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:30:56.063 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:30:56.063 00:30:56.063 real 0m0.223s 00:30:56.063 user 0m0.126s 00:30:56.063 sys 0m0.110s 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:56.063 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.063 ************************************ 00:30:56.063 END TEST nvmf_host_discovery 00:30:56.063 ************************************ 00:30:56.322 15:49:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:30:56.322 15:49:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:56.322 15:49:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:56.322 15:49:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.322 ************************************ 00:30:56.322 START TEST nvmf_host_multipath_status 00:30:56.322 ************************************ 00:30:56.322 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:30:56.322 * Looking for test storage... 00:30:56.322 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:56.322 15:49:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.322 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:30:56.323 15:49:34 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:56.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.323 --rc genhtml_branch_coverage=1 00:30:56.323 --rc genhtml_function_coverage=1 00:30:56.323 --rc genhtml_legend=1 00:30:56.323 --rc geninfo_all_blocks=1 00:30:56.323 --rc geninfo_unexecuted_blocks=1 00:30:56.323 00:30:56.323 ' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:56.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.323 --rc genhtml_branch_coverage=1 00:30:56.323 --rc genhtml_function_coverage=1 00:30:56.323 --rc genhtml_legend=1 00:30:56.323 --rc geninfo_all_blocks=1 00:30:56.323 --rc geninfo_unexecuted_blocks=1 00:30:56.323 00:30:56.323 ' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:56.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.323 --rc genhtml_branch_coverage=1 00:30:56.323 --rc genhtml_function_coverage=1 00:30:56.323 --rc genhtml_legend=1 00:30:56.323 --rc geninfo_all_blocks=1 00:30:56.323 --rc geninfo_unexecuted_blocks=1 00:30:56.323 00:30:56.323 ' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:56.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.323 --rc genhtml_branch_coverage=1 00:30:56.323 --rc genhtml_function_coverage=1 
00:30:56.323 --rc genhtml_legend=1 00:30:56.323 --rc geninfo_all_blocks=1 00:30:56.323 --rc geninfo_unexecuted_blocks=1 00:30:56.323 00:30:56.323 ' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:56.323 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.323 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.324 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.324 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.582 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.582 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.582 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.582 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.582 15:49:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:03.147 15:49:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:03.147 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:03.147 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:03.147 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:03.147 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:03.148 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:03.148 
15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:03.148 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:03.148 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:03.148 altname enp217s0f0np0 00:31:03.148 altname ens818f0np0 00:31:03.148 inet 192.168.100.8/24 scope global mlx_0_0 00:31:03.148 valid_lft forever preferred_lft forever 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
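The discovery pass above goes PCI function -> net device -> IPv4 address: each mlx5 function's netdev is found by globbing its sysfs node (which is how the trace located mlx_0_0 and mlx_0_1), and get_ip_address reduces `ip -o -4 addr show` to the bare address with the awk/cut pipeline shown in the trace. A minimal standalone sketch of the same chain; the BDF and interface names are the ones this host reported, so substitute your own:

    #!/usr/bin/env bash
    # PCI function -> netdev: every entry under .../net is an interface
    # backed by that function.
    pci=0000:d9:00.0    # assumption: BDF observed on this CI host
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue
        iface=${dev##*/}
        # netdev -> IPv4: field 4 of `ip -o -4 addr show` is addr/prefix;
        # cut strips the prefix, as nvmf/common.sh's get_ip_address does.
        ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
    done

On this run that yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1, which become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP below.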
00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:03.148 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:03.148 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:03.148 altname enp217s0f1np1 00:31:03.148 altname ens818f1np1 00:31:03.148 inet 192.168.100.9/24 scope global mlx_0_1 00:31:03.148 valid_lft forever preferred_lft forever 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:03.148 15:49:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:03.148 192.168.100.9' 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:03.148 192.168.100.9' 00:31:03.148 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:03.149 192.168.100.9' 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:03.149 15:49:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2443403 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2443403 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2443403 ']' 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:03.149 15:49:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:03.149 [2024-11-03 15:49:40.908332] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:31:03.149 [2024-11-03 15:49:40.908384] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.407 [2024-11-03 15:49:40.985191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:03.407 [2024-11-03 15:49:41.006445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.407 [2024-11-03 15:49:41.006485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.407 [2024-11-03 15:49:41.006495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.407 [2024-11-03 15:49:41.006503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.407 [2024-11-03 15:49:41.006526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
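nvmfappstart has launched the target (`nvmf_tgt -i 0 -e 0xFFFF -m 0x3`, pid 2443403) and waitforlisten blocks until /var/tmp/spdk.sock answers; everything after that is plain RPC traffic. A condensed sketch of the target-side bring-up, built only from the RPCs that follow in the trace (the workspace path is this CI checkout's; the malloc sizes come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 above):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumption: CI checkout path
    rpc_py=$SPDK_ROOT/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # RDMA transport, a 64 MiB / 512 B-block malloc namespace, and one
    # subsystem listening on both test ports of the first target IP.
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns $NQN Malloc0
    $rpc_py nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420
    $rpc_py nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4421

bdevperf then attaches the same subsystem twice (once per port) with `bdev_nvme_attach_controller ... -x multipath`, producing the two I/O paths whose status the rest of the run asserts.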
00:31:03.407 [2024-11-03 15:49:41.007829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.407 [2024-11-03 15:49:41.007833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.407 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:03.407 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:31:03.407 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:03.407 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:03.407 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:03.407 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.407 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2443403 00:31:03.407 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:03.665 [2024-11-03 15:49:41.334776] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7ca590/0x7cea40) succeed. 00:31:03.665 [2024-11-03 15:49:41.343610] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7cba90/0x8100e0) succeed. 00:31:03.665 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:03.924 Malloc0 00:31:03.924 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:04.181 15:49:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:04.439 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:04.439 [2024-11-03 15:49:42.174907] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:04.439 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:31:04.698 [2024-11-03 15:49:42.347157] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2443694 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2443694 /var/tmp/bdevperf.sock 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2443694 ']' 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:04.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:04.698 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:04.956 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:04.956 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:31:04.956 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:05.214 15:49:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:05.472 Nvme0n1 00:31:05.472 15:49:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:05.730 Nvme0n1 00:31:05.730 15:49:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:05.730 15:49:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:07.624 15:49:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:07.624 15:49:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:31:07.881 15:49:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:07.881 15:49:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:09.251 15:49:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:09.251 15:49:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:09.251 15:49:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.251 15:49:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:09.251 15:49:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.251 15:49:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:09.251 15:49:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.251 15:49:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:09.251 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:09.251 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:09.251 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.251 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:09.508 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.508 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:09.508 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:09.508 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.766 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.766 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:09.766 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.766 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:10.023 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.023 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
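port_status above is a thin probe: it asks bdevperf's RPC server for its I/O paths and lets jq pick one field for one listener port. The function body itself is not echoed in the trace, but a reconstruction consistent with every @64 invocation shown (socket path and jq filter verbatim; port, attribute, and expected value are the three arguments):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # usage: port_status 4420 current true  -> exit 0 iff the field matches
    port_status() {
        local port=$1 attr=$2 expected=$3 got
        got=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
              | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $got == "$expected" ]]
    }

check_status simply runs this six times per pass: current, connected, and accessible for each of ports 4420 and 4421, matching the true/false pattern expected for the ANA states just applied.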
00:31:10.023 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.023 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:10.023 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.023 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:10.023 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:10.280 15:49:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:10.537 15:49:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:11.468 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:11.468 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:11.468 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.468 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:11.725 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:11.725 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:11.725 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.725 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:11.992 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.992 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:11.992 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.992 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:11.992 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.992 15:49:49 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:11.992 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.992 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:12.251 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.251 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:12.251 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.251 15:49:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:12.509 15:49:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.509 15:49:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:12.509 15:49:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.509 15:49:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:12.768 15:49:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.768 15:49:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:12.768 15:49:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:12.768 15:49:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:31:13.025 15:49:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:14.481 15:49:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:14.481 15:49:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:14.481 15:49:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.481 15:49:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:14.481 15:49:51 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.481 15:49:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:14.481 15:49:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:14.481 15:49:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.481 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:14.481 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:14.481 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.481 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:14.747 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.747 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:14.747 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.747 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:14.747 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.747 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:14.747 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.747 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:15.004 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.004 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:15.004 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.004 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:15.261 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.261 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:31:15.261 15:49:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:15.519 15:49:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:15.519 15:49:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:16.890 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:16.890 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:16.890 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.891 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:16.891 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.891 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:16.891 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.891 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:17.148 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:17.148 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:17.148 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.148 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:17.148 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.148 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:17.148 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:17.148 15:49:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.406 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.406 15:49:55 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:17.406 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.406 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:17.664 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.664 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:17.664 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.664 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:17.664 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:17.664 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:17.664 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:31:17.921 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:18.178 15:49:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:19.110 15:49:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:19.110 15:49:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:19.110 15:49:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.110 15:49:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:19.367 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.367 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:19.367 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.367 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:19.624 15:49:57 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.624 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:19.624 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.624 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:19.882 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.882 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:19.882 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.882 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:19.882 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.882 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:19.882 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.882 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:20.139 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:20.139 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:20.139 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.139 15:49:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:20.396 15:49:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:20.396 15:49:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:20.396 15:49:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:31:20.653 15:49:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:20.653 15:49:58 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.031 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:22.291 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.291 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:22.291 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.291 15:49:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:22.549 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.549 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:22.549 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.549 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:22.807 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:31:22.807 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:22.807 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.807 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:22.807 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.807 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:23.065 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:23.065 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:31:23.324 15:50:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:23.324 15:50:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:24.698 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:24.698 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:24.698 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.698 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:24.698 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.698 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:24.698 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.698 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:24.956 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.956 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:24.956 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.956 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:24.956 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.956 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:24.956 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.956 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:25.215 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.215 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:25.215 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.215 15:50:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:25.473 15:50:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.473 15:50:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:25.473 15:50:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.473 15:50:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:25.731 15:50:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.731 15:50:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:25.731 15:50:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:25.731 15:50:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:25.989 15:50:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:26.924 15:50:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:26.924 15:50:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:26.924 15:50:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.924 15:50:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:27.183 15:50:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.183 15:50:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:27.183 15:50:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.183 15:50:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:27.441 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.441 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:27.441 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.441 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:27.700 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.700 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:27.700 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.700 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:27.700 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.700 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:27.700 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.700 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:27.959 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.959 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:27.959 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.959 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:28.217 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.217 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:28.217 15:50:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:28.475 15:50:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:31:28.475 15:50:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.850 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:30.108 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.108 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:30.108 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:31:30.108 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:30.367 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.367 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:30.367 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.367 15:50:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:30.625 15:50:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.625 15:50:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:30.625 15:50:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.625 15:50:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:30.625 15:50:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.625 15:50:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:30.625 15:50:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:30.883 15:50:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:31.142 15:50:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:32.077 15:50:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:32.077 15:50:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:32.077 15:50:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.077 15:50:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:32.335 15:50:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.335 15:50:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:32.335 15:50:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.335 15:50:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:32.593 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:32.593 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:32.593 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.593 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:32.593 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.593 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:32.593 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.593 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:32.851 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.851 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:32.851 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.851 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:33.109 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.109 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:33.109 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.109 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:33.368 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.368 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2443694 00:31:33.368 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2443694 ']' 00:31:33.368 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2443694 00:31:33.368 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@957 -- # uname 00:31:33.368 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:33.368 15:50:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2443694 00:31:33.368 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:31:33.368 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:31:33.368 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2443694' 00:31:33.368 killing process with pid 2443694 00:31:33.368 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2443694 00:31:33.368 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2443694 00:31:33.368 { 00:31:33.368 "results": [ 00:31:33.368 { 00:31:33.368 "job": "Nvme0n1", 00:31:33.368 "core_mask": "0x4", 00:31:33.368 "workload": "verify", 00:31:33.368 "status": "terminated", 00:31:33.368 "verify_range": { 00:31:33.368 "start": 0, 00:31:33.368 "length": 16384 00:31:33.368 }, 00:31:33.368 "queue_depth": 128, 00:31:33.368 "io_size": 4096, 00:31:33.368 "runtime": 27.589037, 00:31:33.368 "iops": 16123.72334706717, 00:31:33.368 "mibps": 62.983294324481136, 00:31:33.368 "io_failed": 0, 00:31:33.368 "io_timeout": 0, 00:31:33.368 "avg_latency_us": 7919.555823961083, 00:31:33.368 "min_latency_us": 52.8384, 00:31:33.368 "max_latency_us": 3019898.88 00:31:33.368 } 00:31:33.368 ], 00:31:33.368 "core_count": 1 00:31:33.368 } 00:31:33.632 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2443694 00:31:33.632 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:33.632 [2024-11-03 15:49:42.392729] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:31:33.632 [2024-11-03 15:49:42.392791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2443694 ] 00:31:33.632 [2024-11-03 15:49:42.468016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.632 [2024-11-03 15:49:42.490914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.632 Running I/O for 90 seconds... 
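
Every check_status step in the trace above expands to the same three-part pattern: query the I/O paths over the bdevperf RPC socket, filter one attribute of one port with jq, and string-compare against the expected value; each ANA transition is a pair of nvmf_subsystem_listener_set_ana_state calls against the target. A minimal reconstruction of those helpers, assuming they look the way the set -x trace suggests (the $rpc shorthand is ours, and the verbatim multipath_status.sh may differ in detail):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# port_status <trsvcid> <attribute> <expected>  -- e.g. "port_status 4421 accessible false"
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}

# set_ANA_state <state for port 4420> <state for port 4421>
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

# the @133 transition above: 4420 stays non_optimized, 4421 becomes inaccessible,
# after which the 4421 path must report accessible == false
set_ANA_state non_optimized inaccessible
sleep 1
port_status 4421 accessible false
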
00:31:33.632 18816.00 IOPS, 73.50 MiB/s [2024-11-03T14:50:11.422Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-03T14:50:11.422Z] 18901.33 IOPS, 73.83 MiB/s [2024-11-03T14:50:11.422Z] 18897.00 IOPS, 73.82 MiB/s [2024-11-03T14:50:11.422Z] 18868.00 IOPS, 73.70 MiB/s [2024-11-03T14:50:11.422Z] 18918.33 IOPS, 73.90 MiB/s [2024-11-03T14:50:11.422Z] 18872.43 IOPS, 73.72 MiB/s [2024-11-03T14:50:11.422Z] 18860.38 IOPS, 73.67 MiB/s [2024-11-03T14:50:11.422Z] 18844.89 IOPS, 73.61 MiB/s [2024-11-03T14:50:11.422Z] 18829.10 IOPS, 73.55 MiB/s [2024-11-03T14:50:11.422Z] 18817.27 IOPS, 73.50 MiB/s [2024-11-03T14:50:11.422Z] 18806.08 IOPS, 73.46 MiB/s [2024-11-03T14:50:11.422Z] [2024-11-03 15:49:55.615728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.615769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.615805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.615816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.615828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.615838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.615850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x183400 00:31:33.632 [2024-11-03 15:49:55.615860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.615872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.615881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.615892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.615901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.615913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.615922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.615933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.615943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
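
The per-second samples above and the terminated-job summary are consistent with the fixed 4 KiB I/O size: MiB/s is just iops * io_size / 2^20 (18816.00 * 4096 / 1048576 = 73.50, and 16123.72 * 4096 / 1048576 = 62.98, matching the reported "mibps" field). The NOTICE records that follow are the bdevperf-side prints of commands whose completions came back ASYMMETRIC ACCESS INACCESSIBLE (status 03/02) while a listener was in a non-accessible ANA state. A quick sanity check of the throughput arithmetic, assuming the JSON summary above was saved to a file (results.json is a hypothetical name):

jq '.results[0] | .iops * .io_size / 1048576' results.json
# => 62.983294324481136   (the "mibps" value reported above)
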
00:31:33.632 [2024-11-03 15:49:55.615954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.615963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.615979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.615994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.632 [2024-11-03 15:49:55.616264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:33.632 [2024-11-03 15:49:55.616275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x183400 00:31:33.633 [2024-11-03 15:49:55.616386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x183400 00:31:33.633 [2024-11-03 15:49:55.616406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x183400 00:31:33.633 [2024-11-03 15:49:55.616468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x183400 00:31:33.633 [2024-11-03 15:49:55.616491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x183400 00:31:33.633 [2024-11-03 15:49:55.616512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616565] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:33.633 [2024-11-03 15:49:55.616952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:33.633 
[2024-11-03 15:49:55.616975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.633 [2024-11-03 15:49:55.616984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.616997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 
sqhd:0060 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x183400 00:31:33.634 [2024-11-03 15:49:55.617254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x183400 00:31:33.634 [2024-11-03 15:49:55.617275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:33.634 [2024-11-03 15:49:55.617578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.634 [2024-11-03 15:49:55.617659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:33.634 [2024-11-03 15:49:55.617670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.635 [2024-11-03 15:49:55.617679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:33.635 [2024-11-03 15:49:55.617690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.635 [2024-11-03 15:49:55.617699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:33.635 [2024-11-03 15:49:55.617710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.635 [2024-11-03 15:49:55.617719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:33.635 [2024-11-03 15:49:55.617730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.635 [2024-11-03 15:49:55.617739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:33.635 [2024-11-03 15:49:55.617752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:33.635 [2024-11-03 15:49:55.617761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:33.635 [2024-11-03 15:49:55.617772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4072 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:33.635 [2024-11-03 15:49:55.617781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:31:33.635 [2024-11-03 15:49:55.617794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:33.635 [2024-11-03 15:49:55.617802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0
[... the same command/completion pairs repeat on qid:1 for sqhd 0000 through 001d: every pending WRITE (SGL DATA BLOCK, lba 4080-4112) and READ (SGL KEYED DATA BLOCK, lba 3160-3360, key:0x183400) is printed by nvme_io_qpair_print_command and completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:31:33.636 17674.08 IOPS, 69.04 MiB/s
[2024-11-03T14:50:11.426Z] 16411.64 IOPS, 64.11 MiB/s
[2024-11-03T14:50:11.426Z] 15317.53 IOPS, 59.83 MiB/s
[2024-11-03T14:50:11.426Z] 15290.62 IOPS, 59.73 MiB/s
[2024-11-03T14:50:11.426Z] 15492.29 IOPS, 60.52 MiB/s
[2024-11-03T14:50:11.426Z] 15579.89 IOPS, 60.86 MiB/s
[2024-11-03T14:50:11.426Z] 15568.84 IOPS, 60.82 MiB/s
[2024-11-03T14:50:11.426Z] 15557.90 IOPS, 60.77 MiB/s
[2024-11-03T14:50:11.426Z] 15721.43 IOPS, 61.41 MiB/s
[2024-11-03T14:50:11.426Z] 15874.23 IOPS, 62.01 MiB/s
[2024-11-03T14:50:11.426Z] 15965.43 IOPS, 62.36 MiB/s
[2024-11-03T14:50:11.426Z] 15936.04 IOPS, 62.25 MiB/s
[2024-11-03T14:50:11.426Z] 15904.32 IOPS, 62.13 MiB/s
00:31:33.636 [2024-11-03 15:50:08.731038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x183400
00:31:33.636 [2024-11-03 15:50:08.731078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0
[... a second burst follows thirteen seconds later: the pairs repeat on qid:1 for sqhd 002d through 004b (READs lba 70472-70744, WRITEs lba 71008-71232), again all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
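The bursts above (and the continuation just below) are bdevperf's in-flight I/O being failed back while the active path reports its ANA group as inaccessible: status (03/02) decodes as Status Code Type 0x3 (Path Related) / Status Code 0x02 (Asymmetric Access Inaccessible). When triaging a run like this it is easier to count the burst than to read it; a minimal sketch in shell, assuming the console output was saved to console.log (the filename is illustrative, not part of this job):

    # How many completions came back with the ANA-inaccessible path status?
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log
    # Which opcodes were being retried, and how often? (READ vs WRITE)
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' console.log | sort | uniq -c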
00:31:33.637 [2024-11-03 15:50:08.732298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:33.637 [2024-11-03 15:50:08.732307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0
[... the remainder of the burst repeats on qid:1 for sqhd 004d through 006b (READs lba 70760-70984, WRITEs lba 71264-71496), all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:31:33.638 15970.31 IOPS, 62.38 MiB/s
[2024-11-03T14:50:11.428Z] 16071.07 IOPS, 62.78 MiB/s
[2024-11-03T14:50:11.428Z] Received shutdown signal, test time was about 27.589657 seconds
00:31:33.638
00:31:33.638                                  Latency(us)
00:31:33.638 [2024-11-03T14:50:11.428Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:31:33.638 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:33.638 Verification LBA range: start 0x0 length 0x4000
00:31:33.638 Nvme0n1 : 27.59  16123.72  62.98  0.00  0.00  7919.56  52.84  3019898.88
00:31:33.638 [2024-11-03T14:50:11.428Z] ===================================================================================================================
00:31:33.638 [2024-11-03T14:50:11.428Z] Total : 16123.72  62.98  0.00  0.00  7919.56  52.84  3019898.88
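The summary row is internally consistent: bdevperf derives MiB/s from IOPS at this job's 4096-byte I/O size, and the 3019898.88 us maximum latency (about 3 seconds) is the I/O that sat queued across the path flip. A one-line check of the Total row, using the numbers from this run:

    # 16123.72 IOPS * 4096 B per I/O = 62.98 MiB/s, matching the Total row above
    awk 'BEGIN { printf "%.2f MiB/s\n", 16123.72 * 4096 / (1024 * 1024) }'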
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:33.638 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:31:33.638 rmmod nvme_rdma
00:31:33.910 rmmod nvme_fabrics
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2443403 ']'
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2443403
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2443403 ']'
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2443403
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2443403
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2443403'
00:31:33.910 killing process with pid 2443403
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2443403
00:31:33.910 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2443403
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:31:34.176
00:31:34.176 real 0m37.817s
00:31:34.176 user 1m47.573s
00:31:34.176 sys 0m9.092s
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:34.176 ************************************
00:31:34.176 END TEST nvmf_host_multipath_status
00:31:34.176 ************************************
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:34.176 ************************************
00:31:34.176 START TEST nvmf_discovery_remove_ifc
00:31:34.176 ************************************
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:31:34.176 * Looking for test storage...
00:31:34.176 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:31:34.176 15:50:11 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:31:34.436 15:50:11 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:31:34.436 15:50:11 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... xtrace of scripts/common.sh@333-@368: cmp_versions splits 1.15 and 2 on IFS=.-, compares the fields one by one (decimal 1 vs decimal 2, so ver1[v] < ver2[v]) and returns 0, making "lt 1.15 2" true ...]
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[... common/autotest_common.sh@1704-@1705 export the multi-line LCOV_OPTS and LCOV strings (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1) ...]
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
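The lcov version check traced above compares the installed lcov (1.15) against 2 field by field, which is why the legacy --rc lcov_* option names get exported. A condensed sketch of that comparison logic, not the actual scripts/common.sh implementation, just the same idea:

    # Return success iff dotted version $1 is strictly lower than $2.
    lt() {
        local IFS=.-            # split fields on '.' and '-'
        local -a a b
        local v
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first lower field decides
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1                # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"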
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
[... nvmf/common.sh@9-@16 set the defaults: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME ...]
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
[... scripts/common.sh@544-@553 source /etc/opt/spdk-pkgdep/paths/export.sh, and paths/export.sh@2-@6 prepend the golangci/protoc/go toolchain directories to PATH and export it; the full (heavily repeated) PATH string is echoed in the trace each time ...]
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:31:34.436 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:31:34.436 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:31:34.437 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:31:34.437 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:31:34.437
00:31:34.437 real 0m0.238s
00:31:34.437 user 0m0.134s
00:31:34.437 sys 0m0.118s
00:31:34.437 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:34.437 15:50:12 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:31:34.437 ************************************
00:31:34.437 END TEST nvmf_discovery_remove_ifc
00:31:34.437 ************************************
00:31:34.437 15:50:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:31:34.437 15:50:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:31:34.437 15:50:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:34.437 15:50:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:34.437 ************************************
00:31:34.437 START TEST nvmf_identify_kernel_target
00:31:34.437 ************************************
00:31:34.437 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:31:34.437 * Looking for test storage...
00:31:34.437 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
[... the same lcov version check (common/autotest_common.sh@1690-@1705, scripts/common.sh cmp_versions 1.15 '<' 2) and the same host/identify_kernel_nvmf.sh@9 sourcing of nvmf/common.sh, scripts/common.sh and paths/export.sh repeat here under nvmf_identify_kernel_target, including nvme gen-hostnqn and the "line 33: [: : integer expression expected" warning ...]
15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target --
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:34.696 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:34.696 15:50:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:41.262 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.262 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:41.263 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:41.263 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:41.263 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:41.263 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.263 15:50:18 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:41.263 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:41.264 
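(Note on the trace above: rdma_device_init/load_ib_rdma_modules boils down to a fixed modprobe sequence. The following is a minimal sketch reconstructed from the xtrace entries at nvmf/common.sh@62-@72; the real script issues the calls individually rather than in a loop, and its error handling may differ.)

load_ib_rdma_modules() {
    # Linux-only guard, mirroring the "'[' Linux '!=' Linux ']'" check at @62
    [[ $(uname) == Linux ]] || return 0
    # Load the kernel IB/RDMA stack required before an RDMA-CM NVMe-oF target can come up
    local mod
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
}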
15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:41.264 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:41.264 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:41.264 altname enp217s0f0np0 00:31:41.264 altname ens818f0np0 00:31:41.264 inet 192.168.100.8/24 scope global mlx_0_0 00:31:41.264 valid_lft forever preferred_lft forever 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:41.264 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:41.264 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:41.264 altname enp217s0f1np1 00:31:41.264 altname ens818f1np1 00:31:41.264 inet 192.168.100.9/24 scope global mlx_0_1 00:31:41.264 valid_lft forever preferred_lft forever 00:31:41.264 15:50:18 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:41.264 15:50:18 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:41.264 
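(The get_ip_address helper traced above extracts the IPv4 address of an RDMA interface: field 4 of `ip -o -4 addr show` is the CIDR address, and `cut -d/ -f1` drops the prefix length. A minimal sketch of that helper as it appears in the xtrace at @116-@117:)

get_ip_address() {
    local interface=$1
    # e.g. "6: mlx_0_0    inet 192.168.100.8/24 ..." -> "192.168.100.8"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

Here this yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1; the trace that follows collects both into RDMA_IP_LIST and then splits off the first and second target IPs with head -n 1 and tail -n +2 | head -n 1.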
15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:41.264 192.168.100.9' 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:41.264 192.168.100.9' 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:41.264 192.168.100.9' 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:41.264 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:41.523 15:50:19 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:41.523 15:50:19 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:31:44.804 Waiting for block devices as requested 00:31:44.804 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:44.804 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:44.804 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:44.804 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:44.804 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:45.106 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:45.106 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:45.106 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:45.106 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:45.388 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:45.388 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:45.388 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:45.388 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:45.647 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:45.647 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:45.647 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:45.905 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:45.905 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:45.905 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:45.905 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:45.905 15:50:23 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:45.905 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:45.906 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:45.906 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:45.906 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:45.906 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:46.165 No valid GPT data, bailing 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:31:46.165 00:31:46.165 Discovery Log Number of Records 2, Generation counter 2 00:31:46.165 =====Discovery Log Entry 0====== 00:31:46.165 trtype: rdma 00:31:46.165 adrfam: ipv4 00:31:46.165 subtype: current discovery subsystem 00:31:46.165 treq: not specified, sq 
flow control disable supported 00:31:46.165 portid: 1 00:31:46.165 trsvcid: 4420 00:31:46.165 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:46.165 traddr: 192.168.100.8 00:31:46.165 eflags: none 00:31:46.165 rdma_prtype: not specified 00:31:46.165 rdma_qptype: connected 00:31:46.165 rdma_cms: rdma-cm 00:31:46.165 rdma_pkey: 0x0000 00:31:46.165 =====Discovery Log Entry 1====== 00:31:46.165 trtype: rdma 00:31:46.165 adrfam: ipv4 00:31:46.165 subtype: nvme subsystem 00:31:46.165 treq: not specified, sq flow control disable supported 00:31:46.165 portid: 1 00:31:46.165 trsvcid: 4420 00:31:46.165 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:46.165 traddr: 192.168.100.8 00:31:46.165 eflags: none 00:31:46.165 rdma_prtype: not specified 00:31:46.165 rdma_qptype: connected 00:31:46.165 rdma_cms: rdma-cm 00:31:46.165 rdma_pkey: 0x0000 00:31:46.165 15:50:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:31:46.165 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:46.424 ===================================================== 00:31:46.424 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:46.424 ===================================================== 00:31:46.424 Controller Capabilities/Features 00:31:46.424 ================================ 00:31:46.424 Vendor ID: 0000 00:31:46.424 Subsystem Vendor ID: 0000 00:31:46.424 Serial Number: 391a9982da06192ecc1e 00:31:46.424 Model Number: Linux 00:31:46.424 Firmware Version: 6.8.9-20 00:31:46.424 Recommended Arb Burst: 0 00:31:46.424 IEEE OUI Identifier: 00 00 00 00:31:46.424 Multi-path I/O 00:31:46.424 May have multiple subsystem ports: No 00:31:46.424 May have multiple controllers: No 00:31:46.424 Associated with SR-IOV VF: No 00:31:46.424 Max Data Transfer Size: Unlimited 00:31:46.424 Max Number of Namespaces: 0 00:31:46.424 Max Number of I/O Queues: 1024 00:31:46.424 NVMe Specification Version (VS): 1.3 00:31:46.424 NVMe Specification Version (Identify): 1.3 00:31:46.424 Maximum Queue Entries: 128 00:31:46.424 Contiguous Queues Required: No 00:31:46.424 Arbitration Mechanisms Supported 00:31:46.424 Weighted Round Robin: Not Supported 00:31:46.424 Vendor Specific: Not Supported 00:31:46.424 Reset Timeout: 7500 ms 00:31:46.424 Doorbell Stride: 4 bytes 00:31:46.424 NVM Subsystem Reset: Not Supported 00:31:46.424 Command Sets Supported 00:31:46.424 NVM Command Set: Supported 00:31:46.424 Boot Partition: Not Supported 00:31:46.424 Memory Page Size Minimum: 4096 bytes 00:31:46.424 Memory Page Size Maximum: 4096 bytes 00:31:46.424 Persistent Memory Region: Not Supported 00:31:46.424 Optional Asynchronous Events Supported 00:31:46.424 Namespace Attribute Notices: Not Supported 00:31:46.424 Firmware Activation Notices: Not Supported 00:31:46.424 ANA Change Notices: Not Supported 00:31:46.424 PLE Aggregate Log Change Notices: Not Supported 00:31:46.424 LBA Status Info Alert Notices: Not Supported 00:31:46.424 EGE Aggregate Log Change Notices: Not Supported 00:31:46.424 Normal NVM Subsystem Shutdown event: Not Supported 00:31:46.424 Zone Descriptor Change Notices: Not Supported 00:31:46.424 Discovery Log Change Notices: Supported 00:31:46.424 Controller Attributes 00:31:46.424 128-bit Host Identifier: Not Supported 00:31:46.424 Non-Operational Permissive Mode: Not Supported 00:31:46.424 NVM Sets: Not Supported 00:31:46.424 Read Recovery Levels: 
Not Supported 00:31:46.424 Endurance Groups: Not Supported 00:31:46.424 Predictable Latency Mode: Not Supported 00:31:46.424 Traffic Based Keep ALive: Not Supported 00:31:46.424 Namespace Granularity: Not Supported 00:31:46.424 SQ Associations: Not Supported 00:31:46.424 UUID List: Not Supported 00:31:46.424 Multi-Domain Subsystem: Not Supported 00:31:46.424 Fixed Capacity Management: Not Supported 00:31:46.424 Variable Capacity Management: Not Supported 00:31:46.424 Delete Endurance Group: Not Supported 00:31:46.424 Delete NVM Set: Not Supported 00:31:46.424 Extended LBA Formats Supported: Not Supported 00:31:46.424 Flexible Data Placement Supported: Not Supported 00:31:46.424 00:31:46.424 Controller Memory Buffer Support 00:31:46.424 ================================ 00:31:46.424 Supported: No 00:31:46.424 00:31:46.424 Persistent Memory Region Support 00:31:46.424 ================================ 00:31:46.424 Supported: No 00:31:46.424 00:31:46.424 Admin Command Set Attributes 00:31:46.424 ============================ 00:31:46.424 Security Send/Receive: Not Supported 00:31:46.424 Format NVM: Not Supported 00:31:46.424 Firmware Activate/Download: Not Supported 00:31:46.424 Namespace Management: Not Supported 00:31:46.424 Device Self-Test: Not Supported 00:31:46.424 Directives: Not Supported 00:31:46.424 NVMe-MI: Not Supported 00:31:46.424 Virtualization Management: Not Supported 00:31:46.424 Doorbell Buffer Config: Not Supported 00:31:46.424 Get LBA Status Capability: Not Supported 00:31:46.424 Command & Feature Lockdown Capability: Not Supported 00:31:46.424 Abort Command Limit: 1 00:31:46.424 Async Event Request Limit: 1 00:31:46.424 Number of Firmware Slots: N/A 00:31:46.424 Firmware Slot 1 Read-Only: N/A 00:31:46.424 Firmware Activation Without Reset: N/A 00:31:46.424 Multiple Update Detection Support: N/A 00:31:46.424 Firmware Update Granularity: No Information Provided 00:31:46.424 Per-Namespace SMART Log: No 00:31:46.424 Asymmetric Namespace Access Log Page: Not Supported 00:31:46.424 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:46.424 Command Effects Log Page: Not Supported 00:31:46.424 Get Log Page Extended Data: Supported 00:31:46.424 Telemetry Log Pages: Not Supported 00:31:46.424 Persistent Event Log Pages: Not Supported 00:31:46.424 Supported Log Pages Log Page: May Support 00:31:46.424 Commands Supported & Effects Log Page: Not Supported 00:31:46.424 Feature Identifiers & Effects Log Page:May Support 00:31:46.424 NVMe-MI Commands & Effects Log Page: May Support 00:31:46.424 Data Area 4 for Telemetry Log: Not Supported 00:31:46.424 Error Log Page Entries Supported: 1 00:31:46.424 Keep Alive: Not Supported 00:31:46.424 00:31:46.424 NVM Command Set Attributes 00:31:46.424 ========================== 00:31:46.424 Submission Queue Entry Size 00:31:46.424 Max: 1 00:31:46.424 Min: 1 00:31:46.424 Completion Queue Entry Size 00:31:46.424 Max: 1 00:31:46.424 Min: 1 00:31:46.424 Number of Namespaces: 0 00:31:46.424 Compare Command: Not Supported 00:31:46.424 Write Uncorrectable Command: Not Supported 00:31:46.424 Dataset Management Command: Not Supported 00:31:46.424 Write Zeroes Command: Not Supported 00:31:46.424 Set Features Save Field: Not Supported 00:31:46.424 Reservations: Not Supported 00:31:46.424 Timestamp: Not Supported 00:31:46.424 Copy: Not Supported 00:31:46.424 Volatile Write Cache: Not Present 00:31:46.424 Atomic Write Unit (Normal): 1 00:31:46.424 Atomic Write Unit (PFail): 1 00:31:46.424 Atomic Compare & Write Unit: 1 00:31:46.424 Fused Compare & Write: Not 
Supported 00:31:46.424 Scatter-Gather List 00:31:46.424 SGL Command Set: Supported 00:31:46.424 SGL Keyed: Supported 00:31:46.424 SGL Bit Bucket Descriptor: Not Supported 00:31:46.424 SGL Metadata Pointer: Not Supported 00:31:46.424 Oversized SGL: Not Supported 00:31:46.424 SGL Metadata Address: Not Supported 00:31:46.424 SGL Offset: Supported 00:31:46.424 Transport SGL Data Block: Not Supported 00:31:46.424 Replay Protected Memory Block: Not Supported 00:31:46.424 00:31:46.424 Firmware Slot Information 00:31:46.424 ========================= 00:31:46.424 Active slot: 0 00:31:46.424 00:31:46.424 00:31:46.424 Error Log 00:31:46.424 ========= 00:31:46.424 00:31:46.425 Active Namespaces 00:31:46.425 ================= 00:31:46.425 Discovery Log Page 00:31:46.425 ================== 00:31:46.425 Generation Counter: 2 00:31:46.425 Number of Records: 2 00:31:46.425 Record Format: 0 00:31:46.425 00:31:46.425 Discovery Log Entry 0 00:31:46.425 ---------------------- 00:31:46.425 Transport Type: 1 (RDMA) 00:31:46.425 Address Family: 1 (IPv4) 00:31:46.425 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:46.425 Entry Flags: 00:31:46.425 Duplicate Returned Information: 0 00:31:46.425 Explicit Persistent Connection Support for Discovery: 0 00:31:46.425 Transport Requirements: 00:31:46.425 Secure Channel: Not Specified 00:31:46.425 Port ID: 1 (0x0001) 00:31:46.425 Controller ID: 65535 (0xffff) 00:31:46.425 Admin Max SQ Size: 32 00:31:46.425 Transport Service Identifier: 4420 00:31:46.425 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:46.425 Transport Address: 192.168.100.8 00:31:46.425 Transport Specific Address Subtype - RDMA 00:31:46.425 RDMA QP Service Type: 1 (Reliable Connected) 00:31:46.425 RDMA Provider Type: 1 (No provider specified) 00:31:46.425 RDMA CM Service: 1 (RDMA_CM) 00:31:46.425 Discovery Log Entry 1 00:31:46.425 ---------------------- 00:31:46.425 Transport Type: 1 (RDMA) 00:31:46.425 Address Family: 1 (IPv4) 00:31:46.425 Subsystem Type: 2 (NVM Subsystem) 00:31:46.425 Entry Flags: 00:31:46.425 Duplicate Returned Information: 0 00:31:46.425 Explicit Persistent Connection Support for Discovery: 0 00:31:46.425 Transport Requirements: 00:31:46.425 Secure Channel: Not Specified 00:31:46.425 Port ID: 1 (0x0001) 00:31:46.425 Controller ID: 65535 (0xffff) 00:31:46.425 Admin Max SQ Size: 32 00:31:46.425 Transport Service Identifier: 4420 00:31:46.425 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:46.425 Transport Address: 192.168.100.8 00:31:46.425 Transport Specific Address Subtype - RDMA 00:31:46.425 RDMA QP Service Type: 1 (Reliable Connected) 00:31:46.425 RDMA Provider Type: 1 (No provider specified) 00:31:46.425 RDMA CM Service: 1 (RDMA_CM) 00:31:46.425 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:46.684 get_feature(0x01) failed 00:31:46.684 get_feature(0x02) failed 00:31:46.684 get_feature(0x04) failed 00:31:46.684 ===================================================== 00:31:46.684 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.684 ===================================================== 00:31:46.684 Controller Capabilities/Features 00:31:46.684 ================================ 00:31:46.684 Vendor ID: 0000 00:31:46.684 Subsystem Vendor ID: 0000 00:31:46.684 Serial Number: 
589060afb9f63d595bca 00:31:46.684 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:46.684 Firmware Version: 6.8.9-20 00:31:46.684 Recommended Arb Burst: 6 00:31:46.684 IEEE OUI Identifier: 00 00 00 00:31:46.684 Multi-path I/O 00:31:46.684 May have multiple subsystem ports: Yes 00:31:46.684 May have multiple controllers: Yes 00:31:46.684 Associated with SR-IOV VF: No 00:31:46.684 Max Data Transfer Size: 1048576 00:31:46.684 Max Number of Namespaces: 1024 00:31:46.684 Max Number of I/O Queues: 128 00:31:46.684 NVMe Specification Version (VS): 1.3 00:31:46.684 NVMe Specification Version (Identify): 1.3 00:31:46.684 Maximum Queue Entries: 128 00:31:46.684 Contiguous Queues Required: No 00:31:46.684 Arbitration Mechanisms Supported 00:31:46.684 Weighted Round Robin: Not Supported 00:31:46.684 Vendor Specific: Not Supported 00:31:46.684 Reset Timeout: 7500 ms 00:31:46.684 Doorbell Stride: 4 bytes 00:31:46.684 NVM Subsystem Reset: Not Supported 00:31:46.684 Command Sets Supported 00:31:46.684 NVM Command Set: Supported 00:31:46.684 Boot Partition: Not Supported 00:31:46.684 Memory Page Size Minimum: 4096 bytes 00:31:46.684 Memory Page Size Maximum: 4096 bytes 00:31:46.684 Persistent Memory Region: Not Supported 00:31:46.684 Optional Asynchronous Events Supported 00:31:46.684 Namespace Attribute Notices: Supported 00:31:46.684 Firmware Activation Notices: Not Supported 00:31:46.684 ANA Change Notices: Supported 00:31:46.684 PLE Aggregate Log Change Notices: Not Supported 00:31:46.684 LBA Status Info Alert Notices: Not Supported 00:31:46.684 EGE Aggregate Log Change Notices: Not Supported 00:31:46.684 Normal NVM Subsystem Shutdown event: Not Supported 00:31:46.684 Zone Descriptor Change Notices: Not Supported 00:31:46.684 Discovery Log Change Notices: Not Supported 00:31:46.684 Controller Attributes 00:31:46.684 128-bit Host Identifier: Supported 00:31:46.684 Non-Operational Permissive Mode: Not Supported 00:31:46.684 NVM Sets: Not Supported 00:31:46.684 Read Recovery Levels: Not Supported 00:31:46.684 Endurance Groups: Not Supported 00:31:46.684 Predictable Latency Mode: Not Supported 00:31:46.684 Traffic Based Keep ALive: Supported 00:31:46.684 Namespace Granularity: Not Supported 00:31:46.684 SQ Associations: Not Supported 00:31:46.684 UUID List: Not Supported 00:31:46.684 Multi-Domain Subsystem: Not Supported 00:31:46.684 Fixed Capacity Management: Not Supported 00:31:46.684 Variable Capacity Management: Not Supported 00:31:46.684 Delete Endurance Group: Not Supported 00:31:46.684 Delete NVM Set: Not Supported 00:31:46.684 Extended LBA Formats Supported: Not Supported 00:31:46.684 Flexible Data Placement Supported: Not Supported 00:31:46.684 00:31:46.684 Controller Memory Buffer Support 00:31:46.684 ================================ 00:31:46.684 Supported: No 00:31:46.684 00:31:46.684 Persistent Memory Region Support 00:31:46.684 ================================ 00:31:46.684 Supported: No 00:31:46.684 00:31:46.684 Admin Command Set Attributes 00:31:46.684 ============================ 00:31:46.684 Security Send/Receive: Not Supported 00:31:46.684 Format NVM: Not Supported 00:31:46.684 Firmware Activate/Download: Not Supported 00:31:46.684 Namespace Management: Not Supported 00:31:46.684 Device Self-Test: Not Supported 00:31:46.684 Directives: Not Supported 00:31:46.684 NVMe-MI: Not Supported 00:31:46.684 Virtualization Management: Not Supported 00:31:46.684 Doorbell Buffer Config: Not Supported 00:31:46.684 Get LBA Status Capability: Not Supported 00:31:46.684 Command & Feature Lockdown 
Capability: Not Supported 00:31:46.684 Abort Command Limit: 4 00:31:46.684 Async Event Request Limit: 4 00:31:46.684 Number of Firmware Slots: N/A 00:31:46.684 Firmware Slot 1 Read-Only: N/A 00:31:46.684 Firmware Activation Without Reset: N/A 00:31:46.684 Multiple Update Detection Support: N/A 00:31:46.684 Firmware Update Granularity: No Information Provided 00:31:46.684 Per-Namespace SMART Log: Yes 00:31:46.684 Asymmetric Namespace Access Log Page: Supported 00:31:46.684 ANA Transition Time : 10 sec 00:31:46.684 00:31:46.684 Asymmetric Namespace Access Capabilities 00:31:46.684 ANA Optimized State : Supported 00:31:46.684 ANA Non-Optimized State : Supported 00:31:46.684 ANA Inaccessible State : Supported 00:31:46.684 ANA Persistent Loss State : Supported 00:31:46.684 ANA Change State : Supported 00:31:46.684 ANAGRPID is not changed : No 00:31:46.684 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:46.684 00:31:46.684 ANA Group Identifier Maximum : 128 00:31:46.684 Number of ANA Group Identifiers : 128 00:31:46.684 Max Number of Allowed Namespaces : 1024 00:31:46.684 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:46.684 Command Effects Log Page: Supported 00:31:46.684 Get Log Page Extended Data: Supported 00:31:46.684 Telemetry Log Pages: Not Supported 00:31:46.684 Persistent Event Log Pages: Not Supported 00:31:46.684 Supported Log Pages Log Page: May Support 00:31:46.684 Commands Supported & Effects Log Page: Not Supported 00:31:46.684 Feature Identifiers & Effects Log Page:May Support 00:31:46.684 NVMe-MI Commands & Effects Log Page: May Support 00:31:46.684 Data Area 4 for Telemetry Log: Not Supported 00:31:46.684 Error Log Page Entries Supported: 128 00:31:46.684 Keep Alive: Supported 00:31:46.684 Keep Alive Granularity: 1000 ms 00:31:46.684 00:31:46.685 NVM Command Set Attributes 00:31:46.685 ========================== 00:31:46.685 Submission Queue Entry Size 00:31:46.685 Max: 64 00:31:46.685 Min: 64 00:31:46.685 Completion Queue Entry Size 00:31:46.685 Max: 16 00:31:46.685 Min: 16 00:31:46.685 Number of Namespaces: 1024 00:31:46.685 Compare Command: Not Supported 00:31:46.685 Write Uncorrectable Command: Not Supported 00:31:46.685 Dataset Management Command: Supported 00:31:46.685 Write Zeroes Command: Supported 00:31:46.685 Set Features Save Field: Not Supported 00:31:46.685 Reservations: Not Supported 00:31:46.685 Timestamp: Not Supported 00:31:46.685 Copy: Not Supported 00:31:46.685 Volatile Write Cache: Present 00:31:46.685 Atomic Write Unit (Normal): 1 00:31:46.685 Atomic Write Unit (PFail): 1 00:31:46.685 Atomic Compare & Write Unit: 1 00:31:46.685 Fused Compare & Write: Not Supported 00:31:46.685 Scatter-Gather List 00:31:46.685 SGL Command Set: Supported 00:31:46.685 SGL Keyed: Supported 00:31:46.685 SGL Bit Bucket Descriptor: Not Supported 00:31:46.685 SGL Metadata Pointer: Not Supported 00:31:46.685 Oversized SGL: Not Supported 00:31:46.685 SGL Metadata Address: Not Supported 00:31:46.685 SGL Offset: Supported 00:31:46.685 Transport SGL Data Block: Not Supported 00:31:46.685 Replay Protected Memory Block: Not Supported 00:31:46.685 00:31:46.685 Firmware Slot Information 00:31:46.685 ========================= 00:31:46.685 Active slot: 0 00:31:46.685 00:31:46.685 Asymmetric Namespace Access 00:31:46.685 =========================== 00:31:46.685 Change Count : 0 00:31:46.685 Number of ANA Group Descriptors : 1 00:31:46.685 ANA Group Descriptor : 0 00:31:46.685 ANA Group ID : 1 00:31:46.685 Number of NSID Values : 1 00:31:46.685 Change Count : 0 00:31:46.685 ANA State 
: 1 00:31:46.685 Namespace Identifier : 1 00:31:46.685 00:31:46.685 Commands Supported and Effects 00:31:46.685 ============================== 00:31:46.685 Admin Commands 00:31:46.685 -------------- 00:31:46.685 Get Log Page (02h): Supported 00:31:46.685 Identify (06h): Supported 00:31:46.685 Abort (08h): Supported 00:31:46.685 Set Features (09h): Supported 00:31:46.685 Get Features (0Ah): Supported 00:31:46.685 Asynchronous Event Request (0Ch): Supported 00:31:46.685 Keep Alive (18h): Supported 00:31:46.685 I/O Commands 00:31:46.685 ------------ 00:31:46.685 Flush (00h): Supported 00:31:46.685 Write (01h): Supported LBA-Change 00:31:46.685 Read (02h): Supported 00:31:46.685 Write Zeroes (08h): Supported LBA-Change 00:31:46.685 Dataset Management (09h): Supported 00:31:46.685 00:31:46.685 Error Log 00:31:46.685 ========= 00:31:46.685 Entry: 0 00:31:46.685 Error Count: 0x3 00:31:46.685 Submission Queue Id: 0x0 00:31:46.685 Command Id: 0x5 00:31:46.685 Phase Bit: 0 00:31:46.685 Status Code: 0x2 00:31:46.685 Status Code Type: 0x0 00:31:46.685 Do Not Retry: 1 00:31:46.685 Error Location: 0x28 00:31:46.685 LBA: 0x0 00:31:46.685 Namespace: 0x0 00:31:46.685 Vendor Log Page: 0x0 00:31:46.685 ----------- 00:31:46.685 Entry: 1 00:31:46.685 Error Count: 0x2 00:31:46.685 Submission Queue Id: 0x0 00:31:46.685 Command Id: 0x5 00:31:46.685 Phase Bit: 0 00:31:46.685 Status Code: 0x2 00:31:46.685 Status Code Type: 0x0 00:31:46.685 Do Not Retry: 1 00:31:46.685 Error Location: 0x28 00:31:46.685 LBA: 0x0 00:31:46.685 Namespace: 0x0 00:31:46.685 Vendor Log Page: 0x0 00:31:46.685 ----------- 00:31:46.685 Entry: 2 00:31:46.685 Error Count: 0x1 00:31:46.685 Submission Queue Id: 0x0 00:31:46.685 Command Id: 0x0 00:31:46.685 Phase Bit: 0 00:31:46.685 Status Code: 0x2 00:31:46.685 Status Code Type: 0x0 00:31:46.685 Do Not Retry: 1 00:31:46.685 Error Location: 0x28 00:31:46.685 LBA: 0x0 00:31:46.685 Namespace: 0x0 00:31:46.685 Vendor Log Page: 0x0 00:31:46.685 00:31:46.685 Number of Queues 00:31:46.685 ================ 00:31:46.685 Number of I/O Submission Queues: 128 00:31:46.685 Number of I/O Completion Queues: 128 00:31:46.685 00:31:46.685 ZNS Specific Controller Data 00:31:46.685 ============================ 00:31:46.685 Zone Append Size Limit: 0 00:31:46.685 00:31:46.685 00:31:46.685 Active Namespaces 00:31:46.685 ================= 00:31:46.685 get_feature(0x05) failed 00:31:46.685 Namespace ID:1 00:31:46.685 Command Set Identifier: NVM (00h) 00:31:46.685 Deallocate: Supported 00:31:46.685 Deallocated/Unwritten Error: Not Supported 00:31:46.685 Deallocated Read Value: Unknown 00:31:46.685 Deallocate in Write Zeroes: Not Supported 00:31:46.685 Deallocated Guard Field: 0xFFFF 00:31:46.685 Flush: Supported 00:31:46.685 Reservation: Not Supported 00:31:46.685 Namespace Sharing Capabilities: Multiple Controllers 00:31:46.685 Size (in LBAs): 3907029168 (1863GiB) 00:31:46.685 Capacity (in LBAs): 3907029168 (1863GiB) 00:31:46.685 Utilization (in LBAs): 3907029168 (1863GiB) 00:31:46.685 UUID: 619ffd22-ddad-42fb-ada3-d4e0d95dedde 00:31:46.685 Thin Provisioning: Not Supported 00:31:46.685 Per-NS Atomic Units: Yes 00:31:46.685 Atomic Boundary Size (Normal): 0 00:31:46.685 Atomic Boundary Size (PFail): 0 00:31:46.685 Atomic Boundary Offset: 0 00:31:46.685 NGUID/EUI64 Never Reused: No 00:31:46.685 ANA group ID: 1 00:31:46.685 Namespace Write Protected: No 00:31:46.685 Number of LBA Formats: 1 00:31:46.685 Current LBA Format: LBA Format #00 00:31:46.685 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:46.685 00:31:46.685 
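(The identify output above comes from the kernel nvmet target configured earlier through configfs, around nvmf/common.sh@686-@705 in the trace. The xtrace does not show the redirection targets of the bare echo commands at @693-@702, so the standard kernel nvmet attribute files are assumed in this sketch; the teardown traced below simply reverses these steps with rm/rmdir and modprobe -r nvmet_rdma nvmet.)

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
# Model string; matches "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" in the identify output above
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"                  # assumed target of the bare "echo 1" at @695
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # the block device that passed the GPT check
echo 1 > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
echo rdma > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"            # expose the subsystem on port 1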
15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:46.685 rmmod nvme_rdma 00:31:46.685 rmmod nvme_fabrics 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:31:46.685 15:50:24 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:31:49.969 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:49.969 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:51.871 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:51.871 00:31:51.871 real 0m17.208s 00:31:51.871 user 0m4.494s 00:31:51.871 sys 0m9.922s 00:31:51.871 15:50:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:51.871 15:50:29 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:51.871 ************************************ 00:31:51.871 END TEST nvmf_identify_kernel_target 00:31:51.871 ************************************ 00:31:51.871 15:50:29 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:51.871 15:50:29 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:51.871 15:50:29 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:51.871 15:50:29 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.871 ************************************ 00:31:51.871 START TEST nvmf_auth_host 00:31:51.871 ************************************ 00:31:51.871 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:51.871 * Looking for test storage... 
00:31:51.871 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:51.871 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:51.871 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:51.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.872 --rc genhtml_branch_coverage=1 00:31:51.872 --rc genhtml_function_coverage=1 00:31:51.872 --rc genhtml_legend=1 00:31:51.872 --rc geninfo_all_blocks=1 00:31:51.872 --rc geninfo_unexecuted_blocks=1 00:31:51.872 00:31:51.872 ' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:51.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.872 --rc genhtml_branch_coverage=1 00:31:51.872 --rc genhtml_function_coverage=1 00:31:51.872 --rc genhtml_legend=1 00:31:51.872 --rc geninfo_all_blocks=1 00:31:51.872 --rc geninfo_unexecuted_blocks=1 00:31:51.872 00:31:51.872 ' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:51.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.872 --rc genhtml_branch_coverage=1 00:31:51.872 --rc genhtml_function_coverage=1 00:31:51.872 --rc genhtml_legend=1 00:31:51.872 --rc geninfo_all_blocks=1 00:31:51.872 --rc geninfo_unexecuted_blocks=1 00:31:51.872 00:31:51.872 ' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:51.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.872 --rc genhtml_branch_coverage=1 00:31:51.872 --rc genhtml_function_coverage=1 00:31:51.872 --rc genhtml_legend=1 00:31:51.872 --rc geninfo_all_blocks=1 00:31:51.872 --rc geninfo_unexecuted_blocks=1 00:31:51.872 00:31:51.872 ' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:51.872 15:50:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:51.872 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:51.872 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:51.873 15:50:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:59.983 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:59.983 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:59.983 15:50:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:59.983 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:59.983 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.983 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:59.984 15:50:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:59.984 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:59.984 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:59.984 altname enp217s0f0np0 00:31:59.984 altname ens818f0np0 00:31:59.984 inet 192.168.100.8/24 scope global mlx_0_0 00:31:59.984 valid_lft forever preferred_lft forever 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:59.984 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:59.984 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:59.984 altname enp217s0f1np1 00:31:59.984 altname ens818f1np1 00:31:59.984 inet 192.168.100.9/24 scope global mlx_0_1 00:31:59.984 valid_lft forever preferred_lft forever 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:59.984 192.168.100.9' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:59.984 192.168.100.9' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:59.984 192.168.100.9' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
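The two addresses scraped from the mlx_0_* interfaces above drive the rest of the run. The first/second target-IP extraction that common.sh traces (head over the collected list, then tail -n +2) amounts to this standalone sketch, using the script's own variable names and the values visible in the log:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9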
00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2458542 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2458542 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2458542 ']' 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.984 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7cfe15ed0b8ae9af612010da855ade64 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eBO 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7cfe15ed0b8ae9af612010da855ade64 0 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7cfe15ed0b8ae9af612010da855ade64 0 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7cfe15ed0b8ae9af612010da855ade64 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eBO 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eBO 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eBO 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=348faa84b8d34924567a6a0a8069d08b8db97d86806001fbb924719da9f6d30f 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ObL 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 348faa84b8d34924567a6a0a8069d08b8db97d86806001fbb924719da9f6d30f 3 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 348faa84b8d34924567a6a0a8069d08b8db97d86806001fbb924719da9f6d30f 3 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=348faa84b8d34924567a6a0a8069d08b8db97d86806001fbb924719da9f6d30f 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:59.985 15:50:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ObL 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ObL 00:31:59.985 15:50:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ObL 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=adb0c2fe03de8c3b7843962afbfea683df7b37977885b26a 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AW7 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key adb0c2fe03de8c3b7843962afbfea683df7b37977885b26a 0 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 adb0c2fe03de8c3b7843962afbfea683df7b37977885b26a 0 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=adb0c2fe03de8c3b7843962afbfea683df7b37977885b26a 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AW7 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AW7 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AW7 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d76ab6d67fdd0d811f20f95afc02231c401a2ca24348c261 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AtN 00:31:59.985 
15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d76ab6d67fdd0d811f20f95afc02231c401a2ca24348c261 2 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d76ab6d67fdd0d811f20f95afc02231c401a2ca24348c261 2 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d76ab6d67fdd0d811f20f95afc02231c401a2ca24348c261 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AtN 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AtN 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.AtN 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f7bd12b7b80bf913b6237a9de8a6da55 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mck 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f7bd12b7b80bf913b6237a9de8a6da55 1 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f7bd12b7b80bf913b6237a9de8a6da55 1 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f7bd12b7b80bf913b6237a9de8a6da55 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mck 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mck 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.mck 00:31:59.985 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:59.985 15:50:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=548f5cef60cbbffc15580342e4d5ced2 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.FfS 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 548f5cef60cbbffc15580342e4d5ced2 1 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 548f5cef60cbbffc15580342e4d5ced2 1 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=548f5cef60cbbffc15580342e4d5ced2 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.FfS 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.FfS 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.FfS 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eb7b23dd73068cf21645406c721c4ebf471de6553e091215 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Lxl 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eb7b23dd73068cf21645406c721c4ebf471de6553e091215 2 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
eb7b23dd73068cf21645406c721c4ebf471de6553e091215 2 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eb7b23dd73068cf21645406c721c4ebf471de6553e091215 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Lxl 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Lxl 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Lxl 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=08a0076d6d513f7dcfd93c2c17e0192b 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Q2S 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 08a0076d6d513f7dcfd93c2c17e0192b 0 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 08a0076d6d513f7dcfd93c2c17e0192b 0 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=08a0076d6d513f7dcfd93c2c17e0192b 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Q2S 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Q2S 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Q2S 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:59.986 
15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3a4fdf9658a8d6e9228218611fa8a307b8c50ccd4ba04f91ef9a2c8ae4511e48 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JbH 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3a4fdf9658a8d6e9228218611fa8a307b8c50ccd4ba04f91ef9a2c8ae4511e48 3 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3a4fdf9658a8d6e9228218611fa8a307b8c50ccd4ba04f91ef9a2c8ae4511e48 3 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3a4fdf9658a8d6e9228218611fa8a307b8c50ccd4ba04f91ef9a2c8ae4511e48 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JbH 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JbH 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.JbH 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2458542 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2458542 ']' 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
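
Each gen_dhchap_key call traced above reads len/2 random bytes with xxd, writes them to a mktemp file, and wraps them in the NVMe DH-HMAC-CHAP secret representation via the inline "python -" step. A minimal sketch of that flow, assuming SPDK's convention of base64-encoding the raw key followed by its little-endian CRC32 (the two-digit field after DHHC-1: being the 0-3 digest index from the digests map in the trace):

#!/usr/bin/env bash
# Sketch of the gen_dhchap_key flow traced above; the CRC32 framing is an assumption.
digest=1    # 0=null, 1=sha256, 2=sha384, 3=sha512 (the digests map in the trace)
len=32      # hex characters requested; xxd reads len/2 random bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-sha256.XXX)
# The "python -" step formats DHHC-1:<digest>:<base64(key || crc32_le(key))>:
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
b64 = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:{int(sys.argv[2]):02}:{b64}:")
EOF
chmod 0600 "$file"   # keys are secrets; the trace chmods each file the same way
echo "$file"

The appended CRC gives the consumer an integrity check when it decodes the secret; the sha256/sha384/sha512/null names in the temp file paths mirror the digest chosen for each keyid above.
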
00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eBO 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ObL ]] 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ObL 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AW7 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.AtN ]] 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AtN 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.mck 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.986 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.FfS ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FfS 00:31:59.987 15:50:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Lxl 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Q2S ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Q2S 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JbH 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:31:59.987 15:50:37 
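
The waitforlisten/rpc_cmd sequence above loads every generated key file into the SPDK application's keyring, keyN for the host key and ckeyN for the optional bidirectional controller key, before the kernel target is configured. A condensed sketch of that loop, assuming rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock as elsewhere in the suite (the file names are the ones generated earlier in this run):

# Keys generated earlier in this trace (host keys and their controller keys).
keys=(/tmp/spdk.key-null.eBO /tmp/spdk.key-null.AW7 /tmp/spdk.key-sha256.mck
      /tmp/spdk.key-sha384.Lxl /tmp/spdk.key-sha512.JbH)
ckeys=(/tmp/spdk.key-sha512.ObL /tmp/spdk.key-sha384.AtN /tmp/spdk.key-sha256.FfS
       /tmp/spdk.key-null.Q2S "")
rpc=scripts/rpc.py   # assumption: rpc_cmd resolves to this against /var/tmp/spdk.sock
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
    # ckeys[4] is empty: keyid 4 exercises unidirectional authentication only.
    [[ -n ${ckeys[i]} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
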
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:59.987 15:50:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:32:03.264 Waiting for block devices as requested 00:32:03.264 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:03.264 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:03.264 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:03.264 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:03.264 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:03.264 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:03.522 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:03.522 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:03.522 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:03.780 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:03.780 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:03.780 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:04.037 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:04.037 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:04.037 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:04.295 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:04.295 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:05.229 No valid GPT data, bailing 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:32:05.229 00:32:05.229 Discovery Log Number of Records 2, Generation counter 2 00:32:05.229 =====Discovery Log Entry 0====== 00:32:05.229 trtype: rdma 00:32:05.229 adrfam: ipv4 00:32:05.229 subtype: current discovery subsystem 00:32:05.229 treq: not specified, sq flow control disable supported 00:32:05.229 portid: 1 00:32:05.229 trsvcid: 4420 00:32:05.229 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:05.229 traddr: 192.168.100.8 00:32:05.229 eflags: none 00:32:05.229 rdma_prtype: not specified 00:32:05.229 rdma_qptype: connected 00:32:05.229 rdma_cms: rdma-cm 00:32:05.229 rdma_pkey: 0x0000 00:32:05.229 =====Discovery Log Entry 1====== 00:32:05.229 trtype: rdma 00:32:05.229 adrfam: ipv4 00:32:05.229 subtype: nvme subsystem 00:32:05.229 treq: not specified, sq flow control disable supported 00:32:05.229 portid: 1 00:32:05.229 trsvcid: 4420 00:32:05.229 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:05.229 traddr: 192.168.100.8 00:32:05.229 eflags: none 00:32:05.229 rdma_prtype: not specified 00:32:05.229 rdma_qptype: connected 00:32:05.229 rdma_cms: rdma-cm 00:32:05.229 rdma_pkey: 0x0000 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
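
Between the setup.sh reset and the discovery listing, the mkdir/echo/ln entries build a kernel nvmet target over configfs: a subsystem backed by /dev/nvme0n1 (the only block device that passed the GPT-in-use check) and an RDMA port on 192.168.100.8:4420. xtrace does not show redirection targets, so the attribute paths in this sketch are the standard kernel nvmet names, an assumption rather than something read from the log:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"       # presumably flipped back to 0 just below
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
echo rdma > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose the subsystem on the port

The nvme discover output above confirms the result: the well-known discovery subsystem plus nqn.2024-02.io.spdk:cnode0, both on rdma/ipv4 at 192.168.100.8:4420. The subsequent mkdir under hosts/ and echo 0 appear to switch the subsystem from allow-any-host to an explicit allow list for nqn.2024-02.io.spdk:host0.
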
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.229 15:50:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.487 nvme0n1 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.487 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.745 nvme0n1 00:32:05.745 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.745 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.745 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.745 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.745 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
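
Each nvmet_auth_set_key call traced here pushes the negotiated parameters to the kernel side of the exchange: a hash ('hmac(sha256)'), a DH group (ffdhe2048), the host key, and, when a ckey exists, the controller key. The redirections are again invisible in xtrace; assuming the standard kernel nvmet per-host DH-HMAC-CHAP attributes, the writes look like:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"    # the echo 'hmac(sha256)' entries above
echo ffdhe2048 > "$host/dhchap_dhgroup"      # the echo ffdhe2048 entries above
echo "$key" > "$host/dhchap_key"             # the DHHC-1:... host key for this keyid
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # bidirectional case only

Here $key and $ckey stand for the DHHC-1 strings echoed verbatim in the trace for the keyid under test.
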
00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.746 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.003 nvme0n1 00:32:06.003 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.003 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.004 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.261 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:06.261 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:06.261 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.262 nvme0n1 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.262 15:50:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.262 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.262 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.262 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.262 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.262 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.520 nvme0n1 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.520 15:50:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.520 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
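
From here the test sweeps connect_authenticate across every digest, DH group, and keyid combination; each iteration reconfigures the initiator, attaches, verifies, and detaches. One iteration, reduced to the RPCs visible in the trace (rpc_cmd again assumed to wrap scripts/rpc.py):

rpc=scripts/rpc.py
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# A successful DH-HMAC-CHAP exchange leaves exactly one controller, nvme0:
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved below appear to be the remote namespace surfacing on the host as each attach completes, then vanishing again on detach.
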
00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.778 nvme0n1 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.778 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:07.036 
15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.036 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.295 nvme0n1 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=1 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.295 15:50:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.553 nvme0n1 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.553 15:50:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.553 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.810 nvme0n1 00:32:07.810 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.810 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.810 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.810 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.810 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.811 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.069 nvme0n1 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.069 15:50:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.069 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.326 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.326 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.326 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:08.326 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:08.326 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:08.326 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.327 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.327 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:08.327 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:08.327 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:08.327 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:08.327 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:08.327 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:08.327 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.327 15:50:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.327 nvme0n1 00:32:08.327 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.327 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.327 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.327 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.327 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.327 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.585 
15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.585 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.843 nvme0n1 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.843 
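Stripped of the xtrace noise, each connect_authenticate pass is just two RPCs against the SPDK host application, taken verbatim from the rpc_cmd lines at host/auth.sh@60-61. A sketch with scripts/rpc.py standing in for rpc_cmd; key0 and ckey0 are key names the test is assumed to have registered earlier, outside this excerpt:

    # Restrict the host to the digest/dhgroup combination under test
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Attach over RDMA, authenticating with key0 (and ckey0 for the
    # controller-to-host direction)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

If the handshake fails for a given digest/dhgroup/keyid combination, the attach RPC errors out and the nvme0n1 namespace seen throughout this log never appears.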
15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.843 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.100 nvme0n1 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.100 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:09.358 
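The nvmf/common.sh@769-783 block repeated before every attach is get_main_ns_ip picking the address to dial for the current transport. Reassembled from those trace lines; the variable names are verbatim, while the exact control flow is inferred from the checks at @775-778:

    # Sketch of get_main_ns_ip as implied by the trace: map the transport to
    # the NAME of an environment variable, then dereference it.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_FIRST_TARGET_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion: its value
        echo "${!ip}"                          # here: 192.168.100.8
    }

For this rdma run the candidate resolves through NVMF_FIRST_TARGET_IP, hence the constant 192.168.100.8 echoed before every bdev_nvme_attach_controller call.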
15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.358 15:50:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.616 nvme0n1 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.616 15:50:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.616 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.874 nvme0n1 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.874 
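Each cycle closes with the same verification visible at host/auth.sh@64-65: list the controllers, compare the name against the expected nvme0 (the [[ nvme0 == \n\v\m\e\0 ]] lines are bash matching against a fully escaped, therefore literal, pattern), and detach so the next combination starts from a clean slate. In plain form:

    # Confirm the authenticated controller actually came up (host/auth.sh@64)
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]

    # Tear it down before the next digest/dhgroup/keyid combination (@65)
    scripts/rpc.py bdev_nvme_detach_controller nvme0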
15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:09.874 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:09.875 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:09.875 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.875 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.440 nvme0n1 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.440 15:50:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.440 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.698 nvme0n1 00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.698 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.956 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.214 nvme0n1 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
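One recurring line worth unpacking is host/auth.sh@58: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The ${var:+...} expansion produces the alternate text only when var is set and non-empty, so for keyid 4, whose controller key is blank (note the bare ckey= and [[ -z '' ]] lines earlier in the trace), the array expands to nothing and the attach runs without any --dhchap-ctrlr-key argument at all, i.e. one-way rather than bidirectional authentication. A standalone illustration, with placeholder key material:

    ckeys=([1]="DHHC-1:02:placeholder" [4]="")   # keyid 4: no controller key
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr key args>}"
    done
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=4 -> <no ctrlr key args>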
00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.214 15:50:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.214 15:50:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.780 nvme0n1 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:11.780 15:50:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.780 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.350 nvme0n1 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:12.350 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:12.351 15:50:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.351 15:50:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.608 nvme0n1 00:32:12.608 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 
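
Note the asymmetry visible just above: key index 4 carries no controller key (the trace's `ckey=''` and `[[ -z '' ]]` checks), so its attach omits `--dhchap-ctrlr-key` and authentication is unidirectional, while indexes 0-3 pass both halves. Outside the harness, the same two shapes could be issued directly with `scripts/rpc.py`; a hedged example using the flags and key names from the trace (the named keys must already be registered with the target, which is not shown here):

    # Bidirectional DH-HMAC-CHAP: host key plus controller (ctrlr) key,
    # as for key indexes 0-3 in the trace above.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Unidirectional: key index 4 has no ckey, so only the host proves
    # its identity to the target.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4
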
00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:12.866 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.867 15:50:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.432 nvme0n1 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:13.432 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.433 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.998 nvme0n1 00:32:13.998 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.998 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.998 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.998 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.998 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.998 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:14.256 15:50:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.256 15:50:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.821 nvme0n1 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.821 
15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.821 15:50:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.386 nvme0n1 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.387 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:15.645 15:50:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.645 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.211 nvme0n1 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
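
At this point the outermost loops advance: host/auth.sh@100/@101 roll over from sha256 to sha384 and restart the DH-group list at ffdhe2048, so every digest is exercised against every group and every key index. The overall shape, as a sketch whose array contents are only partly confirmed by this excerpt (sha256/sha384 and ffdhe2048/ffdhe6144/ffdhe8192 appear in the trace; the remaining ffdhe groups are an assumption about the full list):

    # Outer matrix implied by host/auth.sh@100-@103: every digest x
    # DH group x key index combination is authenticated once.
    # Only the values seen in this excerpt are confirmed; the rest of
    # the dhgroups list is assumed.
    digests=(sha256 sha384)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

The `connect_authenticate sha384 ffdhe2048 0` call that follows matches this shape: same key material as before, re-verified under the new digest and the smallest DH group.
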
00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.211 15:50:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.531 nvme0n1 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.531 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.532 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.790 nvme0n1 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.790 15:50:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.790 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.048 nvme0n1 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.048 15:50:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.048 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.306 nvme0n1 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.306 15:50:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:17.306 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:17.307 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:17.307 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:17.307 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.307 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:17.564 nvme0n1 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:17.564 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:17.565 
15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.565 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.822 nvme0n1 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:17.822 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.080 nvme0n1 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.080 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.338 15:50:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.595 nvme0n1 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.595 15:50:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.595 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.596 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.853 nvme0n1 00:32:18.853 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:18.854 15:50:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.854 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.113 nvme0n1 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:19.113 15:50:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.113 15:50:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.371 nvme0n1 00:32:19.371 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.371 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.371 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.371 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.371 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.371 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.629 15:50:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:19.629 15:50:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.629 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.888 nvme0n1 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.888 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.147 nvme0n1 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:32:20.147 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.405 15:50:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:20.405 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:20.406 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:20.406 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:20.406 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:20.406 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.406 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.406 15:50:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.664 nvme0n1 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.664 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.923 nvme0n1 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.923 15:50:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.923 15:50:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.489 nvme0n1 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
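The nvmet_auth_set_key records that follow echo a DHHC-1 secret (and, when present, a controller secret) for the keyid under test. In the DH-HMAC-CHAP secret representation, the second field of DHHC-1:<hh>:<base64>: indicates how the secret was transformed: 00 for an unhashed secret, 01/02/03 for SHA-256/384/512. A hedged, standalone sketch of where those echoes typically land, assuming a kernel nvmet target's configfs layout; the host directory path is an assumption, not copied from the test script, and $key/$ckey stand for the DHHC-1 strings traced in the log:

    # Hypothetical standalone equivalent of the auth.sh@48-51 echoes,
    # assuming a kernel nvmet target exposing per-host DH-HMAC-CHAP attributes.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest used by DH-HMAC-CHAP
    echo 'ffdhe6144'    > "$host_dir/dhchap_dhgroup"   # FFDHE group for the exchange
    echo "$key"         > "$host_dir/dhchap_key"       # host secret for this keyid
    # Controller secret is optional; mirror the [[ -z $ckey ]] guard at auth.sh@51.
    [[ -z "$ckey" ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"

Setting the controller secret only when a ckey exists is what makes keyid 4 (which has an empty ckey in this run) exercise unidirectional authentication while the other keyids exercise bidirectional authentication.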
00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:21.489 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.490 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.056 nvme0n1 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:22.056 15:50:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.056 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:22.057 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:22.057 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:22.057 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:22.057 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:22.057 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:22.057 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.057 15:50:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.622 nvme0n1 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
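Each iteration above follows the same host-side shape: narrow the allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach over RDMA with the keyid under test, confirm the controller came up (which only happens if authentication succeeded), and detach before the next keyid. A minimal sketch of one such pass, as in the keyid=3 records that follow, using SPDK's RPC client; the scripts/rpc.py path is an assumption, the flags are the ones traced above, and key3/ckey3 are key names the test registered earlier:

    rpc=scripts/rpc.py   # hypothetical path to SPDK's RPC client

    # Restrict negotiation to the digest/dhgroup pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Attach over RDMA, authenticating with key3 (and ckey3 for bidirectional auth).
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # Verify the authenticated controller exists, then tear it down for the next pass.
    [[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0

Detaching between iterations keeps each digest/dhgroup/keyid combination independent, so a failure in one pass cannot leave a controller behind that would mask a failure in the next.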
00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.622 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.880 nvme0n1 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.880 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.138 15:51:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.396 nvme0n1 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.396 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.654 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.221 nvme0n1 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.221 15:51:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:24.787 nvme0n1 00:32:24.787 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.787 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.787 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.787 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.787 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.787 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:25.045 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:25.046 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.046 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.046 15:51:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.612 nvme0n1 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:25.612 
15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.612 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.178 nvme0n1 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.178 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.436 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.437 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:26.437 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:26.437 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:26.437 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:26.437 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:26.437 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:26.437 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.437 15:51:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.003 nvme0n1 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.003 15:51:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.003 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.262 nvme0n1 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.262 15:51:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.262 15:51:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.520 nvme0n1 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.521 15:51:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.521 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.779 nvme0n1 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.779 15:51:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.779 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.780 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.038 nvme0n1 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:28.038 15:51:05 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.038 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.297 15:51:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.297 nvme0n1 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.297 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.556 nvme0n1 00:32:28.556 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:28.814 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.815 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.073 nvme0n1 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.073 15:51:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.073 15:51:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.073 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:29.074 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:29.074 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:29.074 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:29.074 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:29.074 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:29.074 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.074 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.332 nvme0n1 00:32:29.332 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.332 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.332 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.332 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.332 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.332 15:51:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 
00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.332 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:29.333 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.333 15:51:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.591 nvme0n1 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.591 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:29.849 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:29.849 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:29.849 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:29.849 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:29.850 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.850 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.850 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.850 nvme0n1 00:32:29.850 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.850 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.850 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.850 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.850 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.850 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.108 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.109 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.368 nvme0n1 00:32:30.368 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.368 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.368 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.368 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.368 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.368 15:51:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.368 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.627 nvme0n1 00:32:30.627 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.627 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.627 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.627 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.627 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.627 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.627 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.627 15:51:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.627 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.627 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:30.885 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:30.886 15:51:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.886 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.144 nvme0n1 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:31.144 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.145 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.145 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:31.145 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:31.145 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:31.145 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:31.145 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:31.145 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.145 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.145 15:51:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.401 nvme0n1 00:32:31.401 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.401 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.401 
15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.401 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.401 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.401 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.401 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.401 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.401 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.401 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.658 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.658 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.658 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:31.658 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.659 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.917 nvme0n1 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.917 15:51:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.917 15:51:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.482 nvme0n1 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.482 15:51:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:32.482 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.483 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.742 nvme0n1 00:32:32.742 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
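Each pass in the trace above follows one shape: host/auth.sh installs the key pair on the target (nvmet_auth_set_key), pins the initiator to a single digest/dhgroup combination (bdev_nvme_set_options), attaches with the matching --dhchap-key (plus --dhchap-ctrlr-key whenever a controller key is defined for that keyid), checks that the controller actually materialized, and detaches before the next combination. A minimal sketch of that driver loop, reconstructed from the host/auth.sh@101-104 and @64-65 lines traced here (helper names are taken from the trace itself; the digest loop is omitted since this stretch is the sha512 block):

  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"          # target side: key + optional ckey
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0              # clean slate for the next pass
    done
  done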
00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:33.000 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
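The nvmf/common.sh@769-783 block that recurs before every attach is the address lookup: get_main_ns_ip maps the transport in use to the name of the environment variable holding the target address, then dereferences it. A simplified sketch of that logic as traced (the transport variable's name is an assumption; the trace only shows its value, rdma):

  get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    ip=${ip_candidates[$TEST_TRANSPORT]}   # picks NVMF_FIRST_TARGET_IP for rdma
    [[ -n ${!ip} ]] && echo "${!ip}"       # indirect expansion -> 192.168.100.8 here
  }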
00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.001 15:51:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.259 nvme0n1 00:32:33.259 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.259 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.259 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.259 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.259 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.515 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.515 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.515 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.515 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.515 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.516 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.772 nvme0n1 00:32:33.772 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.772 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.772 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.772 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.772 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.029 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:34.030 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:34.030 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:34.030 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:34.030 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:34.030 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.030 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.030 15:51:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.287 nvme0n1 00:32:34.287 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.287 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.287 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.287 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.287 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.287 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 
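Every secret echoed above uses the NVMe in-band authentication representation DHHC-1:<id>:<base64>:, where <id> records the hash the secret was transformed with (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32; that is why the traced keys differ in length per keyid. A quick sanity check on one of the keys above, in plain shell with no SPDK needed:

  key='DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS:'
  b64=${key#DHHC-1:*:}; b64=${b64%:}       # strip the DHHC-1:<id>: prefix and trailing colon
  # payload = secret || CRC-32, so the 36 bytes printed here mean a 32-byte secret
  echo -n "$b64" | base64 -d | wc -c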
00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZTE1ZWQwYjhhZTlhZjYxMjAxMGRhODU1YWRlNjQCz4lS: 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: ]] 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ4ZmFhODRiOGQzNDkyNDU2N2E2YTBhODA2OWQwOGI4ZGI5N2Q4NjgwNjAwMWZiYjkyNDcxOWRhOWY2ZDMwZkEM6RY=: 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.545 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.110 nvme0n1 
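rpc_cmd in these traces is the harness wrapper around SPDK's JSON-RPC client, so the same authenticated attach can be issued outside the test with scripts/rpc.py using the identical flags the trace forwards (a sketch, assuming the default RPC socket and that key0/ckey0 are already-registered key names):

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

The timestamps also show the cost of the larger groups: each attach/detach round takes roughly 0.26 s under ffdhe4096 above, but 0.5-0.6 s once the suite moves to ffdhe6144 and ffdhe8192, consistent with the heavier DH modular exponentiation.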
00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.110 15:51:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.110 15:51:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.676 nvme0n1 00:32:35.676 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.676 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.676 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.676 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.676 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.934 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
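The [[ -z ... ]] chain just above is get_main_ns_ip from nvmf/common.sh choosing the address to dial: for an rdma transport it resolves NVMF_FIRST_TARGET_IP (192.168.100.8 on this rig); for tcp it would use NVMF_INITIATOR_IP. The real helper stores the variable name in ip_candidates and dereferences it; collapsed to values, it is roughly:

    # Rough value-level equivalent of get_main_ns_ip (the xtrace shows the
    # name-indirection version). TEST_TRANSPORT is rdma in this run.
    get_main_ns_ip() {
        local -A ip_candidates=(
            [rdma]=$NVMF_FIRST_TARGET_IP   # 192.168.100.8 here
            [tcp]=$NVMF_INITIATOR_IP
        )
        [[ -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
        echo "${ip_candidates[$TEST_TRANSPORT]}"
    }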
00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.935 15:51:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.501 nvme0n1 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWI3YjIzZGQ3MzA2OGNmMjE2NDU0MDZjNzIxYzRlYmY0NzFkZTY1NTNlMDkxMjE1hUegMA==: 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: ]] 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDhhMDA3NmQ2ZDUxM2Y3ZGNmZDkzYzJjMTdlMDE5MmL/p9Da: 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:36.501 15:51:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.501 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.502 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.502 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:36.502 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:36.502 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:36.502 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:36.502 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:36.502 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.502 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.502 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.434 nvme0n1 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
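Note the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment repeated in every iteration: bash's ${var:+word} expansion turns the controller key into an optional argument pair, so when ckeys[keyid] is empty - as for keyid 4, coming up next, where the xtrace shows ckey= with no value - the attach simply runs without --dhchap-ctrlr-key. A standalone demo of the idiom, with placeholder key material:

    ckeys=([0]='DHHC-1:03:...' [4]='')
    for keyid in 0 4; do
        # Expands to two extra words only when a controller key exists.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr key args>}"
    done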
00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2E0ZmRmOTY1OGE4ZDZlOTIyODIxODYxMWZhOGEzMDdiOGM1MGNjZDRiYTA0ZjkxZWY5YTJjOGFlNDUxMWU0OOgEvjY=: 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.434 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.435 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.435 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:37.435 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:37.435 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:37.435 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:37.435 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:37.435 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.435 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.435 15:51:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.016 nvme0n1 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:38.016 
15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.016 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.016 request: 00:32:38.016 { 00:32:38.016 "name": "nvme0", 00:32:38.016 "trtype": "rdma", 00:32:38.016 "traddr": "192.168.100.8", 00:32:38.016 "adrfam": "ipv4", 00:32:38.016 "trsvcid": "4420", 00:32:38.016 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:32:38.016 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:38.016 "prchk_reftag": false, 00:32:38.016 "prchk_guard": false, 00:32:38.016 "hdgst": false, 00:32:38.016 "ddgst": false, 00:32:38.016 "allow_unrecognized_csi": false, 00:32:38.016 "method": "bdev_nvme_attach_controller", 00:32:38.017 "req_id": 1 00:32:38.017 } 00:32:38.017 Got JSON-RPC error response 00:32:38.017 response: 00:32:38.017 { 00:32:38.017 "code": -5, 00:32:38.017 "message": "Input/output error" 00:32:38.017 } 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.017 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.275 request: 00:32:38.276 { 00:32:38.276 "name": "nvme0", 00:32:38.276 "trtype": "rdma", 00:32:38.276 "traddr": "192.168.100.8", 00:32:38.276 "adrfam": "ipv4", 00:32:38.276 "trsvcid": "4420", 00:32:38.276 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:38.276 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:38.276 "prchk_reftag": false, 00:32:38.276 "prchk_guard": false, 00:32:38.276 "hdgst": false, 00:32:38.276 "ddgst": false, 00:32:38.276 "dhchap_key": "key2", 00:32:38.276 "allow_unrecognized_csi": false, 00:32:38.276 "method": "bdev_nvme_attach_controller", 00:32:38.276 "req_id": 1 00:32:38.276 } 00:32:38.276 Got JSON-RPC error response 00:32:38.276 response: 00:32:38.276 { 00:32:38.276 "code": -5, 00:32:38.276 "message": "Input/output error" 00:32:38.276 } 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.276 15:51:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.276 request: 00:32:38.276 { 00:32:38.276 "name": "nvme0", 00:32:38.276 "trtype": "rdma", 00:32:38.276 "traddr": "192.168.100.8", 00:32:38.276 "adrfam": "ipv4", 00:32:38.276 "trsvcid": "4420", 00:32:38.276 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:38.276 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:38.276 "prchk_reftag": false, 00:32:38.276 "prchk_guard": false, 00:32:38.276 "hdgst": false, 00:32:38.276 "ddgst": false, 00:32:38.276 "dhchap_key": "key1", 00:32:38.276 "dhchap_ctrlr_key": "ckey2", 00:32:38.276 "allow_unrecognized_csi": false, 00:32:38.276 "method": "bdev_nvme_attach_controller", 00:32:38.276 "req_id": 1 00:32:38.276 } 00:32:38.276 Got JSON-RPC error response 00:32:38.276 response: 00:32:38.276 { 00:32:38.276 "code": -5, 00:32:38.276 "message": "Input/output error" 00:32:38.276 } 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:38.546 15:51:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.546 nvme0n1 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:38.546 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:38.547 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:38.547 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:38.547 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.547 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.818 
15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.818 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.819 request: 00:32:38.819 { 00:32:38.819 "name": "nvme0", 00:32:38.819 "dhchap_key": "key1", 00:32:38.819 "dhchap_ctrlr_key": "ckey2", 00:32:38.819 "method": "bdev_nvme_set_keys", 00:32:38.819 "req_id": 1 00:32:38.819 } 00:32:38.819 Got JSON-RPC error response 00:32:38.819 response: 00:32:38.819 { 00:32:38.819 "code": -13, 00:32:38.819 "message": "Permission denied" 00:32:38.819 } 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:38.819 15:51:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:39.826 15:51:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.826 15:51:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:39.826 15:51:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.826 15:51:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.826 15:51:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.826 15:51:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:39.826 15:51:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMGMyZmUwM2RlOGMzYjc4NDM5NjJhZmJmZWE2ODNkZjdiMzc5Nzc4ODViMjZhHqyhXA==: 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: ]] 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDc2YWI2ZDY3ZmRkMGQ4MTFmMjBmOTVhZmMwMjIzMWM0MDFhMmNhMjQzNDhjMjYx1sF1BA==: 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.200 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.200 nvme0n1 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdiZDEyYjdiODBiZjkxM2I2MjM3YTlkZThhNmRhNTWT5YPy: 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: ]] 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ4ZjVjZWY2MGNiYmZmYzE1NTgwMzQyZTRkNWNlZDKzIMYH: 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.201 request: 00:32:41.201 { 00:32:41.201 "name": "nvme0", 00:32:41.201 "dhchap_key": "key2", 00:32:41.201 "dhchap_ctrlr_key": "ckey1", 00:32:41.201 "method": "bdev_nvme_set_keys", 00:32:41.201 "req_id": 1 00:32:41.201 } 00:32:41.201 Got JSON-RPC error response 00:32:41.201 response: 00:32:41.201 { 00:32:41.201 "code": -13, 00:32:41.201 "message": "Permission denied" 00:32:41.201 } 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:41.201 15:51:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:42.573 15:51:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.573 15:51:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:42.573 15:51:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.573 15:51:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.573 15:51:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.573 15:51:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:42.573 15:51:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:43.508 15:51:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.508 15:51:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.508 15:51:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.508 15:51:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:43.508 
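This block closes out the negative tests: attach attempts with a missing or mismatched key fail with JSON-RPC -5 (Input/output error), and bdev_nvme_set_keys with a pair the target will not accept fails with -13 (Permission denied). NOT is autotest_common.sh's invert-exit-status guard (it succeeds only when the wrapped command fails; the real one also screens signal exits above 128), and the (( 1 != 0 )) / sleep 1s records above are a wait loop: the controller was attached with --ctrlr-loss-timeout-sec 1 and --reconnect-delay-sec 1, so after the rejected re-authentication it is expected to be reaped, which is the hedged reading of the count dropping from 1 to 0. In outline:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Essence of the NOT helper from autotest_common.sh.
    NOT() { if "$@"; then return 1; else return 0; fi; }

    # Rekeying with a pair the target rejects must fail (-13)...
    NOT $rpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
    # ...after which the 1s ctrlr-loss timeout should drop the controller.
    while (( $($rpc bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done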
15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:43.508 rmmod nvme_rdma 00:32:43.508 rmmod nvme_fabrics 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2458542 ']' 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2458542 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 2458542 ']' 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 2458542 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2458542 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2458542' 00:32:43.508 killing process with pid 2458542 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 2458542 00:32:43.508 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 2458542 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:43.766 
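The cleanup records here and just below unwind the kernel target in reverse order of creation: the host ACL link and host entry go first, then clean_kernel_target dismantles the nvmet configfs tree child-before-parent (the only order configfs permits) and unloads the modules. Condensed from the rm/rmdir records; the file behind the bare 'echo 0' is assumed to be the namespace enable attribute:

    cfs=/sys/kernel/config/nvmet
    subsys=nqn.2024-02.io.spdk:cnode0
    host=nqn.2024-02.io.spdk:host0

    rm "$cfs/subsystems/$subsys/allowed_hosts/$host"        # drop host ACL link
    rmdir "$cfs/hosts/$host"                                # remove host entry
    echo 0 > "$cfs/subsystems/$subsys/namespaces/1/enable"  # assumed target
    rm -f "$cfs/ports/1/subsystems/$subsys"                 # unexport from port
    rmdir "$cfs/subsystems/$subsys/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$cfs/subsystems/$subsys"
    modprobe -r nvmet_rdma nvmet                            # unload target modules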
15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:32:43.766 15:51:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:32:47.045 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:47.045 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:48.947 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:48.947 15:51:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eBO /tmp/spdk.key-null.AW7 /tmp/spdk.key-sha256.mck /tmp/spdk.key-sha384.Lxl /tmp/spdk.key-sha512.JbH /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:32:49.205 15:51:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:32:52.486 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:80:04.1 (8086 2021): Already 
using the vfio-pci driver 00:32:52.486 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:52.486 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:52.486 00:32:52.486 real 1m0.733s 00:32:52.486 user 0m53.596s 00:32:52.486 sys 0m15.523s 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.486 ************************************ 00:32:52.486 END TEST nvmf_auth_host 00:32:52.486 ************************************ 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.486 ************************************ 00:32:52.486 START TEST nvmf_bdevperf 00:32:52.486 ************************************ 00:32:52.486 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:32:52.745 * Looking for test storage... 
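[The END TEST / START TEST banners and the real/user/sys summary above come from the suite's run_test wrapper, which times each test script between asterisk banners. A simplified skeleton of that wrapper (illustrative; the real helper also records exit codes and per-test logs):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                    # produces the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
]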
00:32:52.745 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:52.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.745 --rc genhtml_branch_coverage=1 00:32:52.745 --rc genhtml_function_coverage=1 00:32:52.745 --rc genhtml_legend=1 00:32:52.745 --rc geninfo_all_blocks=1 00:32:52.745 --rc geninfo_unexecuted_blocks=1 00:32:52.745 00:32:52.745 ' 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:52.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.745 --rc genhtml_branch_coverage=1 00:32:52.745 --rc genhtml_function_coverage=1 00:32:52.745 --rc genhtml_legend=1 00:32:52.745 --rc geninfo_all_blocks=1 00:32:52.745 --rc geninfo_unexecuted_blocks=1 00:32:52.745 00:32:52.745 ' 00:32:52.745 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:52.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.745 --rc genhtml_branch_coverage=1 00:32:52.745 --rc genhtml_function_coverage=1 00:32:52.745 --rc genhtml_legend=1 00:32:52.745 --rc geninfo_all_blocks=1 00:32:52.745 --rc geninfo_unexecuted_blocks=1 00:32:52.746 00:32:52.746 ' 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:52.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.746 --rc genhtml_branch_coverage=1 00:32:52.746 --rc genhtml_function_coverage=1 00:32:52.746 --rc genhtml_legend=1 00:32:52.746 --rc geninfo_all_blocks=1 00:32:52.746 --rc geninfo_unexecuted_blocks=1 00:32:52.746 00:32:52.746 ' 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.746 15:51:30 
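[The lt/cmp_versions walk above (scripts/common.sh@333-368) is a field-by-field numeric compare after splitting the version strings on '.', '-' and ':'. Condensed into one function; a simplified sketch of the same idea, not the upstream code:

version_lt() {
    local -a a b; local i
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # missing fields compare as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # versions equal
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"     # same verdict as the trace
]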
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:52.746 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:52.746 15:51:30 
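[The "[: : integer expression expected" complaint above is a real, harmless slip in the sourced common.sh: '[' '' -eq 1 ']' is evaluated with an empty variable. A defensive pattern that avoids it (illustrative only, with a hypothetical flag name; not the upstream fix):

SPDK_TEST_FLAG=""                              # hypothetical flag, empty as in the trace
if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then      # default empty/unset to 0 before -eq
    echo "flag enabled"
fi
]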
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:52.746 15:51:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.301 15:51:36 
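[gather_supported_nvmf_pci_devs above keys a pci_bus_cache associative array on vendor:device IDs (the e810, x722, and Mellanox lists). One way to reproduce such a lookup directly from lspci, finding the ConnectX-4 Lx parts (0x15b3:0x1015) that the "Found 0000:d9:00.x" lines report next; a sketch only, the suite builds its cache differently:

lspci -Dnmm | awk -F'"' '$4 == "15b3" && $6 == "1015" { sub(/ $/, "", $1); print $1 }'
# -> 0000:d9:00.0
#    0000:d9:00.1
]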
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:59.301 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:59.301 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:59.301 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:59.301 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:32:59.301 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:59.302 15:51:36 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:59.302 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:59.302 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:59.302 altname enp217s0f0np0 00:32:59.302 altname ens818f0np0 00:32:59.302 inet 192.168.100.8/24 scope global mlx_0_0 00:32:59.302 valid_lft forever preferred_lft forever 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:59.302 15:51:37 
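[Two helpers do the rest of the discovery above: the net/ directory under a PCI device names its interfaces, and get_ip_address is a three-stage pipeline over 'ip -o'. Both standalone, with commands as in the trace and the BDF from the log:

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")              # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"

get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0                               # -> 192.168.100.8 on this rig
]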
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:59.302 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:59.302 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:59.302 altname enp217s0f1np1 00:32:59.302 altname ens818f1np1 00:32:59.302 inet 192.168.100.9/24 scope global mlx_0_1 00:32:59.302 valid_lft forever preferred_lft forever 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:32:59.302 15:51:37 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:59.302 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:32:59.560 192.168.100.9' 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:32:59.560 192.168.100.9' 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:32:59.560 192.168.100.9' 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:32:59.560 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2474052 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@510 -- # waitforlisten 2474052 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2474052 ']' 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:59.561 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:59.561 [2024-11-03 15:51:37.196314] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:32:59.561 [2024-11-03 15:51:37.196380] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.561 [2024-11-03 15:51:37.275542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:59.561 [2024-11-03 15:51:37.298105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:59.561 [2024-11-03 15:51:37.298143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:59.561 [2024-11-03 15:51:37.298152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:59.561 [2024-11-03 15:51:37.298161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:59.561 [2024-11-03 15:51:37.298167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
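[nvmfappstart above launches nvmf_tgt (-i 0 -e 0xFFFF -m 0xE) and then blocks in waitforlisten until the RPC socket answers. That gate reduces to a poll like this; a sketch with the default socket path from the log and an assumed retry budget:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break   # target is up
    sleep 0.1
done
]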
00:32:59.561 [2024-11-03 15:51:37.299752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:59.561 [2024-11-03 15:51:37.302980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:59.561 [2024-11-03 15:51:37.302984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.818 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:59.818 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:32:59.818 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:59.818 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:59.818 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:59.819 [2024-11-03 15:51:37.468380] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x142d3d0/0x1431880) succeed. 00:32:59.819 [2024-11-03 15:51:37.477352] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x142e970/0x1472f20) succeed. 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:59.819 Malloc0 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.819 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:33:00.076 [2024-11-03 15:51:37.624553] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.076 { 00:33:00.076 "params": { 00:33:00.076 "name": "Nvme$subsystem", 00:33:00.076 "trtype": "$TEST_TRANSPORT", 00:33:00.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.076 "adrfam": "ipv4", 00:33:00.076 "trsvcid": "$NVMF_PORT", 00:33:00.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.076 "hdgst": ${hdgst:-false}, 00:33:00.076 "ddgst": ${ddgst:-false} 00:33:00.076 }, 00:33:00.076 "method": "bdev_nvme_attach_controller" 00:33:00.076 } 00:33:00.076 EOF 00:33:00.076 )") 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:00.076 15:51:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.076 "params": { 00:33:00.076 "name": "Nvme1", 00:33:00.076 "trtype": "rdma", 00:33:00.076 "traddr": "192.168.100.8", 00:33:00.076 "adrfam": "ipv4", 00:33:00.076 "trsvcid": "4420", 00:33:00.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.076 "hdgst": false, 00:33:00.076 "ddgst": false 00:33:00.076 }, 00:33:00.076 "method": "bdev_nvme_attach_controller" 00:33:00.076 }' 00:33:00.076 [2024-11-03 15:51:37.674996] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:33:00.076 [2024-11-03 15:51:37.675046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474080 ] 00:33:00.076 [2024-11-03 15:51:37.752736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.076 [2024-11-03 15:51:37.774936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.334 Running I/O for 1 seconds... 
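[gen_nvmf_target_json above assembles the bdev_nvme_attach_controller entry that bdevperf reads over an anonymous fd (--json /dev/fd/62, i.e. bash process substitution). An equivalent explicit invocation; the inner params block is verbatim from the trace, while the outer subsystems wrapper is an assumption, since only the entry itself is printed:

bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "params": {
    "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" } ] } ] }
EOF
)
]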
00:33:01.266 18304.00 IOPS, 71.50 MiB/s 00:33:01.266 Latency(us) 00:33:01.266 [2024-11-03T14:51:39.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.266 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:01.266 Verification LBA range: start 0x0 length 0x4000 00:33:01.266 Nvme1n1 : 1.01 18331.43 71.61 0.00 0.00 6945.93 149.91 10800.33 00:33:01.266 [2024-11-03T14:51:39.056Z] =================================================================================================================== 00:33:01.266 [2024-11-03T14:51:39.056Z] Total : 18331.43 71.61 0.00 0.00 6945.93 149.91 10800.33 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2474352 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:01.523 { 00:33:01.523 "params": { 00:33:01.523 "name": "Nvme$subsystem", 00:33:01.523 "trtype": "$TEST_TRANSPORT", 00:33:01.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.523 "adrfam": "ipv4", 00:33:01.523 "trsvcid": "$NVMF_PORT", 00:33:01.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.523 "hdgst": ${hdgst:-false}, 00:33:01.523 "ddgst": ${ddgst:-false} 00:33:01.523 }, 00:33:01.523 "method": "bdev_nvme_attach_controller" 00:33:01.523 } 00:33:01.523 EOF 00:33:01.523 )") 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:01.523 15:51:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:01.523 "params": { 00:33:01.523 "name": "Nvme1", 00:33:01.523 "trtype": "rdma", 00:33:01.523 "traddr": "192.168.100.8", 00:33:01.523 "adrfam": "ipv4", 00:33:01.523 "trsvcid": "4420", 00:33:01.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:01.523 "hdgst": false, 00:33:01.523 "ddgst": false 00:33:01.523 }, 00:33:01.523 "method": "bdev_nvme_attach_controller" 00:33:01.523 }' 00:33:01.523 [2024-11-03 15:51:39.178365] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:33:01.523 [2024-11-03 15:51:39.178421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474352 ] 00:33:01.523 [2024-11-03 15:51:39.256190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.523 [2024-11-03 15:51:39.275803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.780 Running I/O for 15 seconds... 00:33:04.085 18304.00 IOPS, 71.50 MiB/s [2024-11-03T14:51:42.441Z] 18368.00 IOPS, 71.75 MiB/s [2024-11-03T14:51:42.441Z] 15:51:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2474052 00:33:04.651 15:51:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:05.477 16426.67 IOPS, 64.17 MiB/s [2024-11-03T14:51:43.267Z] [2024-11-03 15:51:43.161741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.477 [2024-11-03 15:51:43.161777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52258 cdw0:0 sqhd:3f50 p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.161790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.477 [2024-11-03 15:51:43.161815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52258 cdw0:0 sqhd:3f50 p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.161826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.477 [2024-11-03 15:51:43.161834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52258 cdw0:0 sqhd:3f50 p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.161844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.477 [2024-11-03 15:51:43.161853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52258 cdw0:0 sqhd:3f50 p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.163919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:05.477 [2024-11-03 15:51:43.163938] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
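[This is the fault-injection half of the bdevperf test: with verify I/O in flight, and -f on the bdevperf command line presumably keeping the run alive through I/O failures, the target is hard-killed, which produces the CQ transport error and the cascade of ABORTED - SQ DELETION completions that follows. In outline, with the pid from the log; the later target restart is assumed from the test flow and happens outside this excerpt:

kill -9 2474052     # SIGKILL the nvmf target while bdevperf runs -w verify -t 15 -f
sleep 3             # qpairs now fail; every queued WRITE completes as ABORTED
# ...the suite restarts the target so bdevperf can reattach and finish the 15 s run...
]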
00:33:05.477 [2024-11-03 15:51:43.163965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.163981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0 00:33:05.477 [2024-11-03 15:51:43.164395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.477 [2024-11-03 15:51:43.164405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0
[log condensed: ~116 further nvme_qpair.c NOTICE pairs elided. Every remaining queued WRITE (sqid:1, lba 1672 through 2040, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and queued READ (sqid:1, lba 1024 through 1568, len:8, SGL KEYED DATA BLOCK, len:0x1000, key:0x180400) was printed by nvme_io_qpair_print_command and completed with the identical status: ABORTED - SQ DELETION (00/08) qid:1 cid:52258 cdw0:ca2e8000 sqhd:f6aa p:1 m:0 dnr:0]
00:33:05.481 [2024-11-03 15:51:43.183085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.481 [2024-11-03 15:51:43.183101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.481 [2024-11-03 15:51:43.183110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1576 len:8 PRP1 0x0 PRP2 0x0 00:33:05.481 [2024-11-03 15:51:43.183120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.481 [2024-11-03 15:51:43.183214] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 00:33:05.481 [2024-11-03 15:51:43.183240] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress.
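Editor's note on the abort storm above: the "(00/08)" pair that spdk_nvme_print_completion logs is (status code type/status code). Type 0x0 is the generic command status set and code 0x08 is Command Aborted due to SQ Deletion, which is exactly what the host sees when it tears down the I/O submission queue during a controller reset. Note dnr:0 on every completion: the do-not-retry bit is clear, so the bdev_nvme layer may requeue these I/Os once failover finishes. A throwaway decoder (not part of this harness, and assuming the standard NVMe CQE status layout) unpacks the pair from a raw 16-bit status field:

    # Throwaway sketch, not from the test harness: unpack "(SCT/SC)" from a raw
    # 16-bit NVMe CQE status field (bit 0 = phase tag, bits 8:1 = status code,
    # bits 11:9 = status code type).
    decode_nvme_status() {
      local status=$1
      printf 'SCT=0x%x SC=0x%02x P=%d\n' \
        $(( (status >> 9) & 0x7 )) $(( (status >> 1) & 0xff )) $(( status & 1 ))
    }
    decode_nvme_status 0x0011   # -> SCT=0x0 SC=0x08 P=1, i.e. "(00/08)" with p:1 as logged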
00:33:05.481 [2024-11-03 15:51:43.186829] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.481 [2024-11-03 15:51:43.190351] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:05.481 [2024-11-03 15:51:43.190378] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:05.481 [2024-11-03 15:51:43.190389] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed000 00:33:06.673 12320.00 IOPS, 48.12 MiB/s [2024-11-03T14:51:44.463Z] [2024-11-03 15:51:44.194432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:06.673 [2024-11-03 15:51:44.194491] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.673 [2024-11-03 15:51:44.194729] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.673 [2024-11-03 15:51:44.194740] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.673 [2024-11-03 15:51:44.194750] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:33:06.673 [2024-11-03 15:51:44.197347] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.673 [2024-11-03 15:51:44.200123] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.673 [2024-11-03 15:51:44.203485] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:06.673 [2024-11-03 15:51:44.203510] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:06.673 [2024-11-03 15:51:44.203521] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed000 00:33:07.498 9856.00 IOPS, 38.50 MiB/s [2024-11-03T14:51:45.288Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2474052 Killed "${NVMF_APP[@]}" "$@" 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2475415 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2475415 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2475415 ']' 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:07.498 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:07.498 [2024-11-03 15:51:45.199060] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:33:07.498 [2024-11-03 15:51:45.199113] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.498 [2024-11-03 15:51:45.207417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:07.498 [2024-11-03 15:51:45.207444] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.498 [2024-11-03 15:51:45.207635] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.498 [2024-11-03 15:51:45.207646] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.498 [2024-11-03 15:51:45.207656] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:33:07.498 [2024-11-03 15:51:45.208779] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 00:33:07.498 [2024-11-03 15:51:45.210352] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.499 [2024-11-03 15:51:45.221661] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.499 [2024-11-03 15:51:45.224449] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:07.499 [2024-11-03 15:51:45.224469] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:07.499 [2024-11-03 15:51:45.224481] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed000 00:33:07.499 [2024-11-03 15:51:45.278051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:07.757 [2024-11-03 15:51:45.299912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.757 [2024-11-03 15:51:45.299945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.757 [2024-11-03 15:51:45.299954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.757 [2024-11-03 15:51:45.299962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:07.757 [2024-11-03 15:51:45.299973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.757 [2024-11-03 15:51:45.301343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:07.757 [2024-11-03 15:51:45.301367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:07.757 [2024-11-03 15:51:45.301369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.757 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:07.757 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:33:07.757 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:07.757 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:07.757 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:07.757 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.757 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:07.757 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.757 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:07.757 [2024-11-03 15:51:45.461968] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6333d0/0x637880) succeed. 00:33:07.757 8213.33 IOPS, 32.08 MiB/s [2024-11-03T14:51:45.547Z] [2024-11-03 15:51:45.470857] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x634970/0x678f20) succeed. 
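Editor's note on the core mask: the restarted target was launched with -m 0xE, and the reactor lines above confirm the decode. 0xE is binary 1110, so reactors run on cores 1, 2 and 3 (hence "Total cores available: 3") while core 0 is left free for the host-side tooling. A throwaway loop (not from the harness) expands any SPDK core mask the same way:

    # Throwaway helper, not part of the harness: list the cores an SPDK core
    # mask selects. 0xE = 0b1110 -> cores 1 2 3, matching the reactor lines above.
    mask=0xE
    for core in $(seq 0 63); do
      (( (mask >> core) & 1 )) && printf 'core %d\n' "$core"
    done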
00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.016 Malloc0 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.016 [2024-11-03 15:51:45.619477] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.016 15:51:45 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2474352 00:33:08.582 [2024-11-03 15:51:46.228537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:08.582 [2024-11-03 15:51:46.228566] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.582 [2024-11-03 15:51:46.228739] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.582 [2024-11-03 15:51:46.228750] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.582 [2024-11-03 15:51:46.228760] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:33:08.582 [2024-11-03 15:51:46.231430] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.582 [2024-11-03 15:51:46.239632] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.582 [2024-11-03 15:51:46.283638] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
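Editor's note for readers following the trace: steps @17 through @21 of bdevperf.sh configure the restarted target entirely over JSON-RPC, in order: an RDMA transport with 1024 shared buffers and an 8192-byte IO unit, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with any-host access (-a) and serial SPDK00000000000001, the namespace attachment, and the listener on 192.168.100.8:4420. Done by hand, the same sequence looks roughly like this (a sketch assuming a running nvmf_tgt reachable on the default /var/tmp/spdk.sock):

    # Sketch of the same target setup performed manually; assumes nvmf_tgt is
    # already running and scripts/rpc.py talks to the default /var/tmp/spdk.sock.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420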
00:33:09.774 7531.43 IOPS, 29.42 MiB/s [2024-11-03T14:51:48.497Z] 8904.00 IOPS, 34.78 MiB/s [2024-11-03T14:51:49.870Z] 9971.44 IOPS, 38.95 MiB/s [2024-11-03T14:51:50.501Z] 10824.80 IOPS, 42.28 MiB/s [2024-11-03T14:51:51.883Z] 11522.27 IOPS, 45.01 MiB/s [2024-11-03T14:51:52.817Z] 12103.33 IOPS, 47.28 MiB/s [2024-11-03T14:51:53.751Z] 12598.23 IOPS, 49.21 MiB/s [2024-11-03T14:51:54.685Z] 13023.36 IOPS, 50.87 MiB/s [2024-11-03T14:51:54.685Z] 13387.60 IOPS, 52.30 MiB/s 00:33:16.895 Latency(us) 00:33:16.895 [2024-11-03T14:51:54.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.895 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:16.896 Verification LBA range: start 0x0 length 0x4000 00:33:16.896 Nvme1n1 : 15.00 13389.11 52.30 10580.09 0.00 5321.69 365.36 1046898.28 00:33:16.896 [2024-11-03T14:51:54.686Z] =================================================================================================================== 00:33:16.896 [2024-11-03T14:51:54.686Z] Total : 13389.11 52.30 10580.09 0.00 5321.69 365.36 1046898.28 00:33:16.896 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:16.896 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:16.896 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.896 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.896 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.896 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:16.896 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:16.896 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:16.896 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:33:17.154 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:17.154 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:17.154 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:33:17.154 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:17.154 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:17.154 rmmod nvme_rdma 00:33:17.155 rmmod nvme_fabrics 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2475415 ']' 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2475415 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 2475415 ']' 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 2475415 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:17.155 
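A quick aside on the results table above before the teardown trace continues: the IOPS and MiB/s columns are consistent by construction. At this verify job's 4096-byte IO size, MiB/s = IOPS * 4096 / 2^20, i.e. IOPS / 256, so the final 13389.11 IOPS works out to about 52.30 MiB/s, exactly the Nvme1n1 row; the per-second samples check out the same way (9856.00 IOPS gives 38.50 MiB/s). An illustrative one-liner:

    # Illustrative check: at a 4096-byte IO size, MiB/s = IOPS * 4096 / 2^20 = IOPS / 256.
    awk 'BEGIN { printf "%.2f\n%.2f\n", 13389.11 / 256, 9856.00 / 256 }'   # -> 52.30 and 38.50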
15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2475415 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2475415' 00:33:17.155 killing process with pid 2475415 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 2475415 00:33:17.155 15:51:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 2475415 00:33:17.413 15:51:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:17.413 15:51:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:17.413 00:33:17.413 real 0m24.806s 00:33:17.413 user 1m2.013s 00:33:17.413 sys 0m6.392s 00:33:17.413 15:51:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:17.413 15:51:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.413 ************************************ 00:33:17.413 END TEST nvmf_bdevperf 00:33:17.413 ************************************ 00:33:17.413 15:51:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:33:17.413 15:51:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:17.413 15:51:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:17.414 15:51:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.414 ************************************ 00:33:17.414 START TEST nvmf_target_disconnect 00:33:17.414 ************************************ 00:33:17.414 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:33:17.414 * Looking for test storage... 
00:33:17.414 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:17.414 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:17.414 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:17.414 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:17.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.674 --rc genhtml_branch_coverage=1 00:33:17.674 --rc genhtml_function_coverage=1 00:33:17.674 --rc genhtml_legend=1 00:33:17.674 --rc geninfo_all_blocks=1 00:33:17.674 --rc geninfo_unexecuted_blocks=1 00:33:17.674 00:33:17.674 ' 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:17.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.674 --rc genhtml_branch_coverage=1 00:33:17.674 --rc genhtml_function_coverage=1 00:33:17.674 --rc genhtml_legend=1 00:33:17.674 --rc geninfo_all_blocks=1 00:33:17.674 --rc geninfo_unexecuted_blocks=1 00:33:17.674 00:33:17.674 ' 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:17.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.674 --rc genhtml_branch_coverage=1 00:33:17.674 --rc genhtml_function_coverage=1 00:33:17.674 --rc genhtml_legend=1 00:33:17.674 --rc geninfo_all_blocks=1 00:33:17.674 --rc geninfo_unexecuted_blocks=1 00:33:17.674 00:33:17.674 ' 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:17.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.674 --rc genhtml_branch_coverage=1 00:33:17.674 --rc genhtml_function_coverage=1 00:33:17.674 --rc genhtml_legend=1 00:33:17.674 --rc geninfo_all_blocks=1 00:33:17.674 --rc geninfo_unexecuted_blocks=1 00:33:17.674 00:33:17.674 ' 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.674 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:17.675 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.675 15:51:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:24.242 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:24.242 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:24.242 15:52:01 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:24.242 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.242 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:24.243 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
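(Annotation: the discovery loop above matched the two Mellanox mlx5 functions at 0000:d9:00.0/.1 and resolved their netdev names through sysfs. A rough stand-alone equivalent — assuming lspci is available and the usual /sys/bus/pci layout, not the harness's literal code:

  # List netdev names for every Mellanox (vendor 0x15b3) PCI function
  for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
    ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null   # e.g. mlx_0_0, mlx_0_1
  done
)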
00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:24.243 15:52:01 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:24.243 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:24.243 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:24.243 altname enp217s0f0np0 00:33:24.243 altname ens818f0np0 00:33:24.243 inet 192.168.100.8/24 scope global mlx_0_0 00:33:24.243 valid_lft forever preferred_lft forever 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:24.243 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:24.243 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:24.243 altname enp217s0f1np1 00:33:24.243 altname ens818f1np1 00:33:24.243 inet 192.168.100.9/24 scope global mlx_0_1 00:33:24.243 valid_lft forever preferred_lft forever 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
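(Annotation: get_ip_address above is a three-stage pipeline over `ip -o -4 addr show`. The same extraction as a stand-alone helper, with the addresses this rig reported:

  get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  get_ip_address mlx_0_0   # -> 192.168.100.8
  get_ip_address mlx_0_1   # -> 192.168.100.9
)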
00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:24.243 192.168.100.9' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:24.243 192.168.100.9' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:24.243 192.168.100.9' 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:24.243 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:24.244 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:24.244 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:24.244 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:24.244 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:24.244 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:24.244 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:24.244 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:24.244 15:52:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:24.244 ************************************ 00:33:24.244 START TEST nvmf_target_disconnect_tc1 00:33:24.244 ************************************ 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:33:24.244 15:52:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:24.502 [2024-11-03 15:52:02.144467] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:24.502 [2024-11-03 15:52:02.144506] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:24.502 [2024-11-03 15:52:02.144515] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7000 00:33:25.438 [2024-11-03 15:52:03.148467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:33:25.438 [2024-11-03 15:52:03.148502] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
00:33:25.438 [2024-11-03 15:52:03.148517] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:33:25.438 [2024-11-03 15:52:03.148549] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:25.438 [2024-11-03 15:52:03.148561] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:33:25.438 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:33:25.438 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:25.438 Initializing NVMe Controllers 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:25.438 00:33:25.438 real 0m1.146s 00:33:25.438 user 0m0.893s 00:33:25.438 sys 0m0.242s 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:25.438 ************************************ 00:33:25.438 END TEST nvmf_target_disconnect_tc1 00:33:25.438 ************************************ 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:25.438 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:25.697 ************************************ 00:33:25.697 START TEST nvmf_target_disconnect_tc2 00:33:25.697 ************************************ 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2480480 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2480480 00:33:25.697 15:52:03 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2480480 ']' 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:25.697 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.697 [2024-11-03 15:52:03.304472] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:33:25.697 [2024-11-03 15:52:03.304519] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.697 [2024-11-03 15:52:03.394942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:25.697 [2024-11-03 15:52:03.417313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.697 [2024-11-03 15:52:03.417353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.697 [2024-11-03 15:52:03.417363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.697 [2024-11-03 15:52:03.417371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.697 [2024-11-03 15:52:03.417378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:25.697 [2024-11-03 15:52:03.419190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:25.697 [2024-11-03 15:52:03.419299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:25.697 [2024-11-03 15:52:03.419405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:25.697 [2024-11-03 15:52:03.419407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.956 Malloc0 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.956 [2024-11-03 15:52:03.608907] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xae3640/0xaef2d0) succeed. 00:33:25.956 [2024-11-03 15:52:03.618357] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae4c80/0xb30970) succeed. 
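(Annotation: the four "Reactor started on core 4..7" notices above correspond to the -m 0xF0 mask passed to nvmf_tgt: 0xF0 = 0b11110000, i.e. cores 4 through 7. A quick bash one-off to decode such a mask — illustration only, not part of the harness:

  mask=0xF0
  for core in {0..31}; do
    (( (mask >> core) & 1 )) && echo "core $core"
  done
)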
00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.956 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.214 [2024-11-03 15:52:03.766480] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.214 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.215 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2480512 00:33:26.215 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:26.215 15:52:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:28.115 15:52:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
2480480 00:33:28.115 15:52:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Write completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 Read completed with error (sct=0, sc=8) 00:33:29.490 starting I/O failed 00:33:29.490 [2024-11-03 15:52:06.972642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:30.056 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2480480 Killed "${NVMF_APP[@]}" "$@" 00:33:30.056 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2481297 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2481297 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2481297 ']' 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:30.057 15:52:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.057 [2024-11-03 15:52:07.845045] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:33:30.057 [2024-11-03 15:52:07.845100] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.316 [2024-11-03 15:52:07.938582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:30.316 [2024-11-03 15:52:07.960001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.316 [2024-11-03 15:52:07.960039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.316 [2024-11-03 15:52:07.960050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.316 [2024-11-03 15:52:07.960058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.316 [2024-11-03 15:52:07.960066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
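The restarted target is launched with reactor mask 0xF0, which selects CPU cores 4 through 7; that is why exactly four reactors come up, one each on cores 4-7, in the entries that follow. A minimal sketch for decoding such a mask (the one-liner is illustrative, not part of the test):

    # SPDK's -m takes a hex bitmask of CPU cores; 0xF0 = 0b11110000 -> cores 4..7
    python3 -c 'm = 0xF0; print([c for c in range(64) if m >> c & 1])'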
00:33:30.316 [2024-11-03 15:52:07.961794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:30.316 [2024-11-03 15:52:07.961905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:30.316 [2024-11-03 15:52:07.962013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:30.316 [2024-11-03 15:52:07.962014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Write completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 Read completed with error (sct=0, sc=8) 00:33:30.316 starting I/O failed 00:33:30.316 [2024-11-03 15:52:07.977848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.316 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:30.316 15:52:08 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:33:30.316 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:30.316 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:30.316 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.316 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.316 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:30.316 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.316 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.575 Malloc0 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.575 [2024-11-03 15:52:08.175142] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5d3640/0x5df2d0) succeed. 00:33:30.575 [2024-11-03 15:52:08.184625] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5d4c80/0x620970) succeed. 
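Each rpc_cmd entry above and below is the test harness driving SPDK's JSON-RPC interface over /var/tmp/spdk.sock (rpc_cmd is effectively a wrapper around scripts/rpc.py). Outside the harness, the same target configuration can be reproduced against a running nvmf_tgt roughly as follows, a sketch assuming the default RPC socket:

    # create the backing bdev and the RDMA transport
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    # create the subsystem, attach the namespace, and listen on the RDMA address
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420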
00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.575 [2024-11-03 15:52:08.327015] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:30.575 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.576 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:30.576 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.576 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.576 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.576 15:52:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2480512 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 
starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Write completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.511 starting I/O failed 00:33:31.511 Read completed with error (sct=0, sc=8) 00:33:31.512 starting I/O failed 00:33:31.512 Read completed with error (sct=0, sc=8) 00:33:31.512 starting I/O failed 00:33:31.512 Read completed with error (sct=0, sc=8) 00:33:31.512 starting I/O failed 00:33:31.512 Write completed with error (sct=0, sc=8) 00:33:31.512 starting I/O failed 00:33:31.512 Write completed with error (sct=0, sc=8) 00:33:31.512 starting I/O failed 00:33:31.512 Write completed with error (sct=0, sc=8) 00:33:31.512 starting I/O failed 00:33:31.512 [2024-11-03 15:52:08.982723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 [2024-11-03 15:52:08.993029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:08.993080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:08.993102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:08.993113] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:08.993122] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.002995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 
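Stripped of the xtrace noise, the sequence recorded above is target_disconnect.sh's tc2 scenario: start an I/O load with the reconnect example, hard-kill the target underneath it, then restart the target so the host must re-establish its queue pairs. Each burst of exactly 32 "completed with error (sct=0, sc=8) / starting I/O failed" pairs is one queue depth's worth of outstanding I/O (-q 32) completing in error when the connection drops. In outline (a sketch of the flow as it appears in this log, not the literal script; $nvmfpid stands for the target's PID):

    # host-side load: 32-deep 4K random read/write against the RDMA listener
    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    sleep 2
    kill -9 "$nvmfpid"     # hard-kill the target; in-flight I/O completes in error
    sleep 2
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # restart and reconfigure the target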
00:33:31.512 [2024-11-03 15:52:09.012838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.012882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.012901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.012915] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.012924] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.023196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 00:33:31.512 [2024-11-03 15:52:09.032866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.032909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.032928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.032938] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.032946] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.043154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 00:33:31.512 [2024-11-03 15:52:09.052923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.052975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.052994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.053004] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.053013] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.063263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 
00:33:31.512 [2024-11-03 15:52:09.073034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.073083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.073101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.073111] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.073120] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.083171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 00:33:31.512 [2024-11-03 15:52:09.093106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.093148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.093167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.093177] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.093186] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.103272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 00:33:31.512 [2024-11-03 15:52:09.113076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.113119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.113137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.113147] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.113156] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.123500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 
00:33:31.512 [2024-11-03 15:52:09.133090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.133132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.133150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.133160] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.133169] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.143407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 00:33:31.512 [2024-11-03 15:52:09.153320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.153361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.153379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.153389] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.153398] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.163671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 00:33:31.512 [2024-11-03 15:52:09.173337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.173377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.173395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.173405] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.173413] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.183548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.512 qpair failed and we were unable to recover it. 
00:33:31.512 [2024-11-03 15:52:09.193389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.512 [2024-11-03 15:52:09.193429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.512 [2024-11-03 15:52:09.193448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.512 [2024-11-03 15:52:09.193457] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.512 [2024-11-03 15:52:09.193466] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.512 [2024-11-03 15:52:09.203639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.513 qpair failed and we were unable to recover it. 00:33:31.513 [2024-11-03 15:52:09.213348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.513 [2024-11-03 15:52:09.213391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.513 [2024-11-03 15:52:09.213409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.513 [2024-11-03 15:52:09.213419] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.513 [2024-11-03 15:52:09.213428] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.513 [2024-11-03 15:52:09.223670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.513 qpair failed and we were unable to recover it. 00:33:31.513 [2024-11-03 15:52:09.233412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.513 [2024-11-03 15:52:09.233454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.513 [2024-11-03 15:52:09.233473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.513 [2024-11-03 15:52:09.233482] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.513 [2024-11-03 15:52:09.233491] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.513 [2024-11-03 15:52:09.243744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.513 qpair failed and we were unable to recover it. 
00:33:31.513 [2024-11-03 15:52:09.253554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.513 [2024-11-03 15:52:09.253604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.513 [2024-11-03 15:52:09.253623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.513 [2024-11-03 15:52:09.253634] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.513 [2024-11-03 15:52:09.253642] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.513 [2024-11-03 15:52:09.263662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.513 qpair failed and we were unable to recover it. 00:33:31.513 [2024-11-03 15:52:09.273720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.513 [2024-11-03 15:52:09.273758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.513 [2024-11-03 15:52:09.273780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.513 [2024-11-03 15:52:09.273790] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.513 [2024-11-03 15:52:09.273798] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.513 [2024-11-03 15:52:09.283755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.513 qpair failed and we were unable to recover it. 00:33:31.513 [2024-11-03 15:52:09.293632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.513 [2024-11-03 15:52:09.293674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.513 [2024-11-03 15:52:09.293693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.513 [2024-11-03 15:52:09.293702] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.513 [2024-11-03 15:52:09.293711] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.771 [2024-11-03 15:52:09.304030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.771 qpair failed and we were unable to recover it. 
00:33:31.771 [2024-11-03 15:52:09.313699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.771 [2024-11-03 15:52:09.313744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.771 [2024-11-03 15:52:09.313762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.771 [2024-11-03 15:52:09.313771] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.771 [2024-11-03 15:52:09.313781] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.771 [2024-11-03 15:52:09.323833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.771 qpair failed and we were unable to recover it. 00:33:31.771 [2024-11-03 15:52:09.333697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.771 [2024-11-03 15:52:09.333742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.771 [2024-11-03 15:52:09.333760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.771 [2024-11-03 15:52:09.333769] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.771 [2024-11-03 15:52:09.333778] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.771 [2024-11-03 15:52:09.344061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.771 qpair failed and we were unable to recover it. 00:33:31.771 [2024-11-03 15:52:09.353816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.771 [2024-11-03 15:52:09.353860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.771 [2024-11-03 15:52:09.353878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.771 [2024-11-03 15:52:09.353892] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.771 [2024-11-03 15:52:09.353900] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.771 [2024-11-03 15:52:09.363991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.771 qpair failed and we were unable to recover it. 
00:33:31.771 [2024-11-03 15:52:09.373878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.771 [2024-11-03 15:52:09.373923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.771 [2024-11-03 15:52:09.373941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.771 [2024-11-03 15:52:09.373950] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.373959] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.772 [2024-11-03 15:52:09.384398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.772 qpair failed and we were unable to recover it. 00:33:31.772 [2024-11-03 15:52:09.393922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.772 [2024-11-03 15:52:09.393973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.772 [2024-11-03 15:52:09.393992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.772 [2024-11-03 15:52:09.394001] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.394011] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.772 [2024-11-03 15:52:09.404223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.772 qpair failed and we were unable to recover it. 00:33:31.772 [2024-11-03 15:52:09.414084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.772 [2024-11-03 15:52:09.414120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.772 [2024-11-03 15:52:09.414138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.772 [2024-11-03 15:52:09.414148] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.414157] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.772 [2024-11-03 15:52:09.424279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.772 qpair failed and we were unable to recover it. 
00:33:31.772 [2024-11-03 15:52:09.434067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.772 [2024-11-03 15:52:09.434108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.772 [2024-11-03 15:52:09.434126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.772 [2024-11-03 15:52:09.434135] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.434144] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.772 [2024-11-03 15:52:09.444251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.772 qpair failed and we were unable to recover it. 00:33:31.772 [2024-11-03 15:52:09.454137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.772 [2024-11-03 15:52:09.454183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.772 [2024-11-03 15:52:09.454201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.772 [2024-11-03 15:52:09.454211] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.454220] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.772 [2024-11-03 15:52:09.464372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.772 qpair failed and we were unable to recover it. 00:33:31.772 [2024-11-03 15:52:09.474213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.772 [2024-11-03 15:52:09.474256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.772 [2024-11-03 15:52:09.474275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.772 [2024-11-03 15:52:09.474284] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.474293] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.772 [2024-11-03 15:52:09.484488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.772 qpair failed and we were unable to recover it. 
00:33:31.772 [2024-11-03 15:52:09.494347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.772 [2024-11-03 15:52:09.494390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.772 [2024-11-03 15:52:09.494409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.772 [2024-11-03 15:52:09.494419] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.494428] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.772 [2024-11-03 15:52:09.504555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.772 qpair failed and we were unable to recover it. 00:33:31.772 [2024-11-03 15:52:09.514395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.772 [2024-11-03 15:52:09.514437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.772 [2024-11-03 15:52:09.514455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.772 [2024-11-03 15:52:09.514465] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.514473] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.772 [2024-11-03 15:52:09.524548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.772 qpair failed and we were unable to recover it. 00:33:31.772 [2024-11-03 15:52:09.534425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.772 [2024-11-03 15:52:09.534466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.772 [2024-11-03 15:52:09.534484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.772 [2024-11-03 15:52:09.534493] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.534502] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:31.772 [2024-11-03 15:52:09.544587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:31.772 qpair failed and we were unable to recover it. 
00:33:31.772 [2024-11-03 15:52:09.554505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.772 [2024-11-03 15:52:09.554546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.772 [2024-11-03 15:52:09.554564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.772 [2024-11-03 15:52:09.554574] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.772 [2024-11-03 15:52:09.554583] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.029 [2024-11-03 15:52:09.564612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.029 qpair failed and we were unable to recover it. 00:33:32.029 [2024-11-03 15:52:09.574595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.029 [2024-11-03 15:52:09.574635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.029 [2024-11-03 15:52:09.574653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.029 [2024-11-03 15:52:09.574662] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.029 [2024-11-03 15:52:09.574671] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.029 [2024-11-03 15:52:09.584851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.029 qpair failed and we were unable to recover it. 00:33:32.029 [2024-11-03 15:52:09.594740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.029 [2024-11-03 15:52:09.594782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.029 [2024-11-03 15:52:09.594800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.594810] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.594819] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.605017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 
00:33:32.030 [2024-11-03 15:52:09.614714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.030 [2024-11-03 15:52:09.614755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.030 [2024-11-03 15:52:09.614779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.614789] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.614798] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.625056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 00:33:32.030 [2024-11-03 15:52:09.634832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.030 [2024-11-03 15:52:09.634871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.030 [2024-11-03 15:52:09.634890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.634900] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.634909] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.644955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 00:33:32.030 [2024-11-03 15:52:09.654846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.030 [2024-11-03 15:52:09.654884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.030 [2024-11-03 15:52:09.654902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.654912] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.654920] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.665219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 
00:33:32.030 [2024-11-03 15:52:09.674963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.030 [2024-11-03 15:52:09.675009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.030 [2024-11-03 15:52:09.675028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.675037] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.675046] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.685346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 00:33:32.030 [2024-11-03 15:52:09.695112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.030 [2024-11-03 15:52:09.695153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.030 [2024-11-03 15:52:09.695172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.695185] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.695193] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.705595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 00:33:32.030 [2024-11-03 15:52:09.715077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.030 [2024-11-03 15:52:09.715122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.030 [2024-11-03 15:52:09.715141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.715151] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.715160] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.725211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 
00:33:32.030 [2024-11-03 15:52:09.735090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.030 [2024-11-03 15:52:09.735132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.030 [2024-11-03 15:52:09.735150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.735160] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.735169] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.745346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 00:33:32.030 [2024-11-03 15:52:09.755133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.030 [2024-11-03 15:52:09.755175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.030 [2024-11-03 15:52:09.755193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.755203] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.755212] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.765499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 00:33:32.030 [2024-11-03 15:52:09.775328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.030 [2024-11-03 15:52:09.775370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.030 [2024-11-03 15:52:09.775388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.030 [2024-11-03 15:52:09.775397] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.030 [2024-11-03 15:52:09.775406] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:32.030 [2024-11-03 15:52:09.785682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:32.030 qpair failed and we were unable to recover it. 
00:33:33.584 [2024-11-03 15:52:11.119024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.584 [2024-11-03 15:52:11.119063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.584 [2024-11-03 15:52:11.119081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.584 [2024-11-03 15:52:11.119090] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.584 [2024-11-03 15:52:11.119099] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.584 [2024-11-03 15:52:11.129314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.584 qpair failed and we were unable to recover it. 00:33:33.584 [2024-11-03 15:52:11.139218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.584 [2024-11-03 15:52:11.139258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.584 [2024-11-03 15:52:11.139276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.584 [2024-11-03 15:52:11.139286] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.584 [2024-11-03 15:52:11.139294] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.584 [2024-11-03 15:52:11.149418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.584 qpair failed and we were unable to recover it. 00:33:33.584 [2024-11-03 15:52:11.159185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.584 [2024-11-03 15:52:11.159230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.584 [2024-11-03 15:52:11.159247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.584 [2024-11-03 15:52:11.159257] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.584 [2024-11-03 15:52:11.159265] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.169316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 
00:33:33.585 [2024-11-03 15:52:11.179325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.179371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.179389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.179398] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.179407] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.189423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 00:33:33.585 [2024-11-03 15:52:11.199251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.199294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.199313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.199322] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.199331] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.209460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 00:33:33.585 [2024-11-03 15:52:11.219501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.219543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.219562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.219571] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.219580] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.229633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 
00:33:33.585 [2024-11-03 15:52:11.239443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.239483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.239504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.239514] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.239523] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.249627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 00:33:33.585 [2024-11-03 15:52:11.259555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.259596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.259614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.259623] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.259632] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.269659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 00:33:33.585 [2024-11-03 15:52:11.279584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.279624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.279642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.279651] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.279660] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.289695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 
00:33:33.585 [2024-11-03 15:52:11.299628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.299670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.299688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.299697] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.299706] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.309769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 00:33:33.585 [2024-11-03 15:52:11.319740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.319777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.319795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.319808] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.319817] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.329936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 00:33:33.585 [2024-11-03 15:52:11.339651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.339688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.339705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.339715] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.339724] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.349802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 
00:33:33.585 [2024-11-03 15:52:11.359774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.585 [2024-11-03 15:52:11.359817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.585 [2024-11-03 15:52:11.359835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.585 [2024-11-03 15:52:11.359845] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.585 [2024-11-03 15:52:11.359854] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.585 [2024-11-03 15:52:11.370031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.585 qpair failed and we were unable to recover it. 00:33:33.843 [2024-11-03 15:52:11.379883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.843 [2024-11-03 15:52:11.379927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.843 [2024-11-03 15:52:11.379945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.843 [2024-11-03 15:52:11.379954] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.379963] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.390041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 00:33:33.844 [2024-11-03 15:52:11.399991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.400030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.400048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.400057] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.400066] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.410366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 
00:33:33.844 [2024-11-03 15:52:11.420054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.420091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.420109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.420118] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.420127] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.430298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 00:33:33.844 [2024-11-03 15:52:11.440030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.440072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.440089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.440099] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.440108] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.450295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 00:33:33.844 [2024-11-03 15:52:11.460151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.460191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.460209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.460219] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.460227] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.470330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 
00:33:33.844 [2024-11-03 15:52:11.480161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.480202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.480220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.480230] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.480239] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.490314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 00:33:33.844 [2024-11-03 15:52:11.502107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.502152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.502169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.502179] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.502188] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.510504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 00:33:33.844 [2024-11-03 15:52:11.520246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.520284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.520302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.520312] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.520320] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.530463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 
00:33:33.844 [2024-11-03 15:52:11.540182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.540222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.540240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.540249] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.540258] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.550539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 00:33:33.844 [2024-11-03 15:52:11.560456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.560502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.560520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.560530] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.560538] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.570673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 00:33:33.844 [2024-11-03 15:52:11.580519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.580565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.580587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.580596] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.580605] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.590711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 
00:33:33.844 [2024-11-03 15:52:11.600494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.600530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.600548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.600558] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.600566] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.610824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 00:33:33.844 [2024-11-03 15:52:11.620456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.844 [2024-11-03 15:52:11.620496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.844 [2024-11-03 15:52:11.620514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.844 [2024-11-03 15:52:11.620523] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.844 [2024-11-03 15:52:11.620532] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:33.844 [2024-11-03 15:52:11.630706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.844 qpair failed and we were unable to recover it. 00:33:34.102 [2024-11-03 15:52:11.640645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.102 [2024-11-03 15:52:11.640686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.102 [2024-11-03 15:52:11.640704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.640714] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.640722] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.650898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 
00:33:34.103 [2024-11-03 15:52:11.660639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.660686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.660703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.660713] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.660724] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.670806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 00:33:34.103 [2024-11-03 15:52:11.680733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.680769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.680787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.680796] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.680805] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.690847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 00:33:34.103 [2024-11-03 15:52:11.700724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.700768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.700786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.700795] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.700804] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.710879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 
00:33:34.103 [2024-11-03 15:52:11.720875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.720917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.720934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.720944] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.720952] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.731010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 00:33:34.103 [2024-11-03 15:52:11.740919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.740965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.740989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.740998] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.741007] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.751289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 00:33:34.103 [2024-11-03 15:52:11.760770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.760808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.760826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.760835] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.760844] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.771149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 
00:33:34.103 [2024-11-03 15:52:11.780920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.780961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.780985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.780994] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.781003] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.791179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 00:33:34.103 [2024-11-03 15:52:11.800987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.801031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.801049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.801059] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.801067] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.811111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 00:33:34.103 [2024-11-03 15:52:11.821027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.821070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.821088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.821097] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.821105] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.831211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 
00:33:34.103 [2024-11-03 15:52:11.841067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.841107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.841124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.841133] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.841142] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.851428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 00:33:34.103 [2024-11-03 15:52:11.861130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.861169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.861187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.861197] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.861206] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.103 [2024-11-03 15:52:11.871525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.103 qpair failed and we were unable to recover it. 00:33:34.103 [2024-11-03 15:52:11.881272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.103 [2024-11-03 15:52:11.881316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.103 [2024-11-03 15:52:11.881333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.103 [2024-11-03 15:52:11.881343] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.103 [2024-11-03 15:52:11.881351] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:11.891598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 
00:33:34.362 [2024-11-03 15:52:11.901332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:11.901370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:11.901388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:11.901397] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:11.901406] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:11.911500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 00:33:34.362 [2024-11-03 15:52:11.921343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:11.921388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:11.921410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:11.921419] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:11.921428] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:11.931659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 00:33:34.362 [2024-11-03 15:52:11.941538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:11.941577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:11.941596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:11.941606] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:11.941616] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:11.951719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 
00:33:34.362 [2024-11-03 15:52:11.961553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:11.961601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:11.961618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:11.961628] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:11.961637] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:11.971867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 00:33:34.362 [2024-11-03 15:52:11.981583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:11.981620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:11.981638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:11.981648] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:11.981658] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:11.991910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 00:33:34.362 [2024-11-03 15:52:12.001725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:12.001763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:12.001781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:12.001791] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:12.001804] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:12.011894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 
00:33:34.362 [2024-11-03 15:52:12.021687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:12.021729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:12.021747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:12.021757] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:12.021766] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:12.031940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 00:33:34.362 [2024-11-03 15:52:12.041827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:12.041868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:12.041886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:12.041895] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:12.041904] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:12.052098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 00:33:34.362 [2024-11-03 15:52:12.061702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:12.061740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:12.061759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:12.061768] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:12.061777] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:12.072059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 
00:33:34.362 [2024-11-03 15:52:12.081875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:12.081914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:12.081932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:12.081941] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.362 [2024-11-03 15:52:12.081950] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.362 [2024-11-03 15:52:12.092276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.362 qpair failed and we were unable to recover it. 00:33:34.362 [2024-11-03 15:52:12.101865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.362 [2024-11-03 15:52:12.101907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.362 [2024-11-03 15:52:12.101925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.362 [2024-11-03 15:52:12.101934] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.363 [2024-11-03 15:52:12.101943] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.363 [2024-11-03 15:52:12.112249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.363 qpair failed and we were unable to recover it. 00:33:34.363 [2024-11-03 15:52:12.121987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.363 [2024-11-03 15:52:12.122030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.363 [2024-11-03 15:52:12.122048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.363 [2024-11-03 15:52:12.122058] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.363 [2024-11-03 15:52:12.122066] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.363 [2024-11-03 15:52:12.132260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.363 qpair failed and we were unable to recover it. 
00:33:34.363 [2024-11-03 15:52:12.142195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.363 [2024-11-03 15:52:12.142237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.363 [2024-11-03 15:52:12.142254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.363 [2024-11-03 15:52:12.142264] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.363 [2024-11-03 15:52:12.142273] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.621 [2024-11-03 15:52:12.152337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.621 qpair failed and we were unable to recover it. 00:33:34.621 [2024-11-03 15:52:12.162162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.621 [2024-11-03 15:52:12.162201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.621 [2024-11-03 15:52:12.162218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.621 [2024-11-03 15:52:12.162228] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.621 [2024-11-03 15:52:12.162237] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.621 [2024-11-03 15:52:12.172495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.621 qpair failed and we were unable to recover it. 00:33:34.621 [2024-11-03 15:52:12.182156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.621 [2024-11-03 15:52:12.182203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.621 [2024-11-03 15:52:12.182220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.621 [2024-11-03 15:52:12.182230] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.621 [2024-11-03 15:52:12.182238] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.621 [2024-11-03 15:52:12.192454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.621 qpair failed and we were unable to recover it. 
00:33:34.621 [2024-11-03 15:52:12.202363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.621 [2024-11-03 15:52:12.202408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.621 [2024-11-03 15:52:12.202426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.621 [2024-11-03 15:52:12.202436] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.621 [2024-11-03 15:52:12.202445] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.621 [2024-11-03 15:52:12.212688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.621 qpair failed and we were unable to recover it. 00:33:34.621 [2024-11-03 15:52:12.222216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.621 [2024-11-03 15:52:12.222259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.621 [2024-11-03 15:52:12.222277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.621 [2024-11-03 15:52:12.222287] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.621 [2024-11-03 15:52:12.222295] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.621 [2024-11-03 15:52:12.232749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.621 qpair failed and we were unable to recover it. 00:33:34.621 [2024-11-03 15:52:12.242356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.621 [2024-11-03 15:52:12.242403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.621 [2024-11-03 15:52:12.242421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.621 [2024-11-03 15:52:12.242431] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.621 [2024-11-03 15:52:12.242440] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.621 [2024-11-03 15:52:12.252710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.621 qpair failed and we were unable to recover it. 
00:33:34.621 [2024-11-03 15:52:12.262452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.621 [2024-11-03 15:52:12.262496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.621 [2024-11-03 15:52:12.262517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.621 [2024-11-03 15:52:12.262527] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.621 [2024-11-03 15:52:12.262535] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.621 [2024-11-03 15:52:12.272652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.621 qpair failed and we were unable to recover it. 00:33:34.621 [2024-11-03 15:52:12.282466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.621 [2024-11-03 15:52:12.282504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.621 [2024-11-03 15:52:12.282522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.621 [2024-11-03 15:52:12.282532] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.621 [2024-11-03 15:52:12.282541] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.621 [2024-11-03 15:52:12.292837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.621 qpair failed and we were unable to recover it. 00:33:34.621 [2024-11-03 15:52:12.302471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.621 [2024-11-03 15:52:12.302511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.621 [2024-11-03 15:52:12.302529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.621 [2024-11-03 15:52:12.302538] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.621 [2024-11-03 15:52:12.302547] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.621 [2024-11-03 15:52:12.312867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.621 qpair failed and we were unable to recover it. 
00:33:34.621 [2024-11-03 15:52:12.322624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.621 [2024-11-03 15:52:12.322662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.621 [2024-11-03 15:52:12.322679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.621 [2024-11-03 15:52:12.322690] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.622 [2024-11-03 15:52:12.322698] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.622 [2024-11-03 15:52:12.333005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.622 qpair failed and we were unable to recover it. 00:33:34.622 [2024-11-03 15:52:12.342564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.622 [2024-11-03 15:52:12.342606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.622 [2024-11-03 15:52:12.342625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.622 [2024-11-03 15:52:12.342634] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.622 [2024-11-03 15:52:12.342647] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.622 [2024-11-03 15:52:12.352924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.622 qpair failed and we were unable to recover it. 00:33:34.622 [2024-11-03 15:52:12.362694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.622 [2024-11-03 15:52:12.362738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.622 [2024-11-03 15:52:12.362756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.622 [2024-11-03 15:52:12.362765] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.622 [2024-11-03 15:52:12.362774] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.622 [2024-11-03 15:52:12.372884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.622 qpair failed and we were unable to recover it. 
00:33:34.622 [2024-11-03 15:52:12.382759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.622 [2024-11-03 15:52:12.382798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.622 [2024-11-03 15:52:12.382815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.622 [2024-11-03 15:52:12.382825] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.622 [2024-11-03 15:52:12.382833] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.622 [2024-11-03 15:52:12.392957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.622 qpair failed and we were unable to recover it. 00:33:34.622 [2024-11-03 15:52:12.402847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.622 [2024-11-03 15:52:12.402889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.622 [2024-11-03 15:52:12.402906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.622 [2024-11-03 15:52:12.402916] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.622 [2024-11-03 15:52:12.402925] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.880 [2024-11-03 15:52:12.413169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.880 qpair failed and we were unable to recover it. 00:33:34.880 [2024-11-03 15:52:12.422784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.880 [2024-11-03 15:52:12.422826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.880 [2024-11-03 15:52:12.422844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.880 [2024-11-03 15:52:12.422854] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.880 [2024-11-03 15:52:12.422862] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.880 [2024-11-03 15:52:12.433164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.880 qpair failed and we were unable to recover it. 
00:33:34.880 [2024-11-03 15:52:12.442977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.880 [2024-11-03 15:52:12.443016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.880 [2024-11-03 15:52:12.443034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.880 [2024-11-03 15:52:12.443044] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.880 [2024-11-03 15:52:12.443052] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.880 [2024-11-03 15:52:12.453386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.880 qpair failed and we were unable to recover it. 00:33:34.880 [2024-11-03 15:52:12.463035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.880 [2024-11-03 15:52:12.463080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.880 [2024-11-03 15:52:12.463097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.880 [2024-11-03 15:52:12.463106] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.880 [2024-11-03 15:52:12.463115] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.880 [2024-11-03 15:52:12.473135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.880 qpair failed and we were unable to recover it. 00:33:34.880 [2024-11-03 15:52:12.483008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.880 [2024-11-03 15:52:12.483050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.880 [2024-11-03 15:52:12.483067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.880 [2024-11-03 15:52:12.483077] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.880 [2024-11-03 15:52:12.483085] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.880 [2024-11-03 15:52:12.493252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.880 qpair failed and we were unable to recover it. 
00:33:34.880 [2024-11-03 15:52:12.503139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.880 [2024-11-03 15:52:12.503180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.880 [2024-11-03 15:52:12.503199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.880 [2024-11-03 15:52:12.503209] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.880 [2024-11-03 15:52:12.503217] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.880 [2024-11-03 15:52:12.513343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.880 qpair failed and we were unable to recover it. 00:33:34.880 [2024-11-03 15:52:12.523165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.880 [2024-11-03 15:52:12.523210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.880 [2024-11-03 15:52:12.523233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.880 [2024-11-03 15:52:12.523242] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.880 [2024-11-03 15:52:12.523251] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.880 [2024-11-03 15:52:12.533459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.880 qpair failed and we were unable to recover it. 00:33:34.880 [2024-11-03 15:52:12.543405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.880 [2024-11-03 15:52:12.543449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.880 [2024-11-03 15:52:12.543467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.880 [2024-11-03 15:52:12.543476] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.880 [2024-11-03 15:52:12.543485] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.880 [2024-11-03 15:52:12.553464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.880 qpair failed and we were unable to recover it. 
00:33:34.880 [2024-11-03 15:52:12.563329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.880 [2024-11-03 15:52:12.563365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.880 [2024-11-03 15:52:12.563383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.880 [2024-11-03 15:52:12.563393] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.880 [2024-11-03 15:52:12.563401] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.880 [2024-11-03 15:52:12.573781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.880 qpair failed and we were unable to recover it. 00:33:34.881 [2024-11-03 15:52:12.583337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.881 [2024-11-03 15:52:12.583379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.881 [2024-11-03 15:52:12.583396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.881 [2024-11-03 15:52:12.583406] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.881 [2024-11-03 15:52:12.583414] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.881 [2024-11-03 15:52:12.593872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.881 qpair failed and we were unable to recover it. 00:33:34.881 [2024-11-03 15:52:12.603355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.881 [2024-11-03 15:52:12.603398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.881 [2024-11-03 15:52:12.603417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.881 [2024-11-03 15:52:12.603430] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.881 [2024-11-03 15:52:12.603439] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.881 [2024-11-03 15:52:12.613718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.881 qpair failed and we were unable to recover it. 
00:33:34.881 [2024-11-03 15:52:12.623437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.881 [2024-11-03 15:52:12.623480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.881 [2024-11-03 15:52:12.623498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.881 [2024-11-03 15:52:12.623507] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.881 [2024-11-03 15:52:12.623515] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.881 [2024-11-03 15:52:12.633929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.881 qpair failed and we were unable to recover it. 00:33:34.881 [2024-11-03 15:52:12.643517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.881 [2024-11-03 15:52:12.643556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.881 [2024-11-03 15:52:12.643575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.881 [2024-11-03 15:52:12.643584] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.881 [2024-11-03 15:52:12.643593] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:34.881 [2024-11-03 15:52:12.653889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:34.881 qpair failed and we were unable to recover it. 00:33:34.881 [2024-11-03 15:52:12.663587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.881 [2024-11-03 15:52:12.663628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.881 [2024-11-03 15:52:12.663646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.881 [2024-11-03 15:52:12.663655] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.881 [2024-11-03 15:52:12.663664] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.139 [2024-11-03 15:52:12.673981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.139 qpair failed and we were unable to recover it. 
00:33:35.139 [2024-11-03 15:52:12.683694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.139 [2024-11-03 15:52:12.683738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.139 [2024-11-03 15:52:12.683755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.139 [2024-11-03 15:52:12.683765] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.139 [2024-11-03 15:52:12.683773] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.139 [2024-11-03 15:52:12.693906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-03 15:52:12.703740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.139 [2024-11-03 15:52:12.703778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.139 [2024-11-03 15:52:12.703796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.139 [2024-11-03 15:52:12.703806] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.139 [2024-11-03 15:52:12.703814] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.139 [2024-11-03 15:52:12.714028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-03 15:52:12.723674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.139 [2024-11-03 15:52:12.723713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.139 [2024-11-03 15:52:12.723731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.139 [2024-11-03 15:52:12.723740] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.139 [2024-11-03 15:52:12.723749] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.139 [2024-11-03 15:52:12.734128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.139 qpair failed and we were unable to recover it. 
00:33:35.139 [2024-11-03 15:52:12.743905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.139 [2024-11-03 15:52:12.743944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.139 [2024-11-03 15:52:12.743962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.139 [2024-11-03 15:52:12.743977] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.139 [2024-11-03 15:52:12.743986] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.139 [2024-11-03 15:52:12.754039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-03 15:52:12.763952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.139 [2024-11-03 15:52:12.763996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.139 [2024-11-03 15:52:12.764013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.139 [2024-11-03 15:52:12.764023] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.139 [2024-11-03 15:52:12.764031] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.139 [2024-11-03 15:52:12.774281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.139 qpair failed and we were unable to recover it. 00:33:35.139 [2024-11-03 15:52:12.783982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.139 [2024-11-03 15:52:12.784026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.139 [2024-11-03 15:52:12.784043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.139 [2024-11-03 15:52:12.784053] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.139 [2024-11-03 15:52:12.784062] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.140 [2024-11-03 15:52:12.794311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.140 qpair failed and we were unable to recover it. 
00:33:35.140 [2024-11-03 15:52:12.804045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.140 [2024-11-03 15:52:12.804089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.140 [2024-11-03 15:52:12.804107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.140 [2024-11-03 15:52:12.804116] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.140 [2024-11-03 15:52:12.804125] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.140 [2024-11-03 15:52:12.814465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-03 15:52:12.824083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.140 [2024-11-03 15:52:12.824124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.140 [2024-11-03 15:52:12.824143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.140 [2024-11-03 15:52:12.824152] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.140 [2024-11-03 15:52:12.824161] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.140 [2024-11-03 15:52:12.834456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-03 15:52:12.844172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.140 [2024-11-03 15:52:12.844218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.140 [2024-11-03 15:52:12.844235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.140 [2024-11-03 15:52:12.844244] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.140 [2024-11-03 15:52:12.844253] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.140 [2024-11-03 15:52:12.854455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.140 qpair failed and we were unable to recover it. 
00:33:35.140 [2024-11-03 15:52:12.864291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.140 [2024-11-03 15:52:12.864336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.140 [2024-11-03 15:52:12.864357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.140 [2024-11-03 15:52:12.864366] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.140 [2024-11-03 15:52:12.864375] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.140 [2024-11-03 15:52:12.874561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-03 15:52:12.884278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.140 [2024-11-03 15:52:12.884316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.140 [2024-11-03 15:52:12.884335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.140 [2024-11-03 15:52:12.884344] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.140 [2024-11-03 15:52:12.884353] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.140 [2024-11-03 15:52:12.894629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.140 qpair failed and we were unable to recover it. 00:33:35.140 [2024-11-03 15:52:12.904486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.140 [2024-11-03 15:52:12.904528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.140 [2024-11-03 15:52:12.904546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.140 [2024-11-03 15:52:12.904556] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.140 [2024-11-03 15:52:12.904564] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.140 [2024-11-03 15:52:12.914703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.140 qpair failed and we were unable to recover it. 
00:33:35.140 [2024-11-03 15:52:12.924595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.140 [2024-11-03 15:52:12.924643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.140 [2024-11-03 15:52:12.924662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.140 [2024-11-03 15:52:12.924671] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.140 [2024-11-03 15:52:12.924680] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.398 [2024-11-03 15:52:12.934799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.398 qpair failed and we were unable to recover it. 00:33:35.398 [2024-11-03 15:52:12.944522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.398 [2024-11-03 15:52:12.944563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.398 [2024-11-03 15:52:12.944580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.398 [2024-11-03 15:52:12.944593] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.398 [2024-11-03 15:52:12.944602] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.398 [2024-11-03 15:52:12.954776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.398 qpair failed and we were unable to recover it. 00:33:35.398 [2024-11-03 15:52:12.964594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.398 [2024-11-03 15:52:12.964631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.398 [2024-11-03 15:52:12.964649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.398 [2024-11-03 15:52:12.964659] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.398 [2024-11-03 15:52:12.964667] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.398 [2024-11-03 15:52:12.974806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.398 qpair failed and we were unable to recover it. 
00:33:35.398 [2024-11-03 15:52:12.984559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.398 [2024-11-03 15:52:12.984603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.398 [2024-11-03 15:52:12.984621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.398 [2024-11-03 15:52:12.984630] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.398 [2024-11-03 15:52:12.984639] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.398 [2024-11-03 15:52:12.994872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.398 qpair failed and we were unable to recover it. 00:33:35.398 [2024-11-03 15:52:13.004732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.398 [2024-11-03 15:52:13.004778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.398 [2024-11-03 15:52:13.004796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.398 [2024-11-03 15:52:13.004808] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.398 [2024-11-03 15:52:13.004819] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.398 [2024-11-03 15:52:13.014928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.399 qpair failed and we were unable to recover it. 00:33:35.399 [2024-11-03 15:52:13.024747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.399 [2024-11-03 15:52:13.024791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.399 [2024-11-03 15:52:13.024809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.399 [2024-11-03 15:52:13.024818] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.399 [2024-11-03 15:52:13.024827] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.399 [2024-11-03 15:52:13.035024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.399 qpair failed and we were unable to recover it. 
00:33:35.399 [2024-11-03 15:52:13.044772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.399 [2024-11-03 15:52:13.044813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.399 [2024-11-03 15:52:13.044831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.399 [2024-11-03 15:52:13.044840] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.399 [2024-11-03 15:52:13.044849] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.399 [2024-11-03 15:52:13.054937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.399 qpair failed and we were unable to recover it. 00:33:35.399 [2024-11-03 15:52:13.064792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.399 [2024-11-03 15:52:13.064832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.399 [2024-11-03 15:52:13.064850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.399 [2024-11-03 15:52:13.064860] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.399 [2024-11-03 15:52:13.064869] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.399 [2024-11-03 15:52:13.075020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.399 qpair failed and we were unable to recover it. 00:33:35.399 [2024-11-03 15:52:13.085018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.399 [2024-11-03 15:52:13.085064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.399 [2024-11-03 15:52:13.085083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.399 [2024-11-03 15:52:13.085093] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.399 [2024-11-03 15:52:13.085102] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.399 [2024-11-03 15:52:13.095193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.399 qpair failed and we were unable to recover it. 
00:33:35.399 [2024-11-03 15:52:13.104951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.399 [2024-11-03 15:52:13.104998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.399 [2024-11-03 15:52:13.105017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.399 [2024-11-03 15:52:13.105026] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.399 [2024-11-03 15:52:13.105036] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.399 [2024-11-03 15:52:13.115229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.399 qpair failed and we were unable to recover it. 00:33:35.399 [2024-11-03 15:52:13.124948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.399 [2024-11-03 15:52:13.124995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.399 [2024-11-03 15:52:13.125013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.399 [2024-11-03 15:52:13.125022] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.399 [2024-11-03 15:52:13.125031] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.399 [2024-11-03 15:52:13.135249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.399 qpair failed and we were unable to recover it. 00:33:35.399 [2024-11-03 15:52:13.145090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.399 [2024-11-03 15:52:13.145130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.399 [2024-11-03 15:52:13.145148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.399 [2024-11-03 15:52:13.145158] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.399 [2024-11-03 15:52:13.145167] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.399 [2024-11-03 15:52:13.155403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.399 qpair failed and we were unable to recover it. 
00:33:35.399 [2024-11-03 15:52:13.165111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.399 [2024-11-03 15:52:13.165152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.399 [2024-11-03 15:52:13.165170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.399 [2024-11-03 15:52:13.165179] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.399 [2024-11-03 15:52:13.165189] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.399 [2024-11-03 15:52:13.175424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.399 qpair failed and we were unable to recover it. 00:33:35.399 [2024-11-03 15:52:13.185177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.399 [2024-11-03 15:52:13.185219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.399 [2024-11-03 15:52:13.185236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.399 [2024-11-03 15:52:13.185246] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.399 [2024-11-03 15:52:13.185255] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.657 [2024-11-03 15:52:13.195445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.657 qpair failed and we were unable to recover it. 00:33:35.657 [2024-11-03 15:52:13.205340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.657 [2024-11-03 15:52:13.205379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.657 [2024-11-03 15:52:13.205401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.657 [2024-11-03 15:52:13.205411] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.657 [2024-11-03 15:52:13.205420] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.657 [2024-11-03 15:52:13.215496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.657 qpair failed and we were unable to recover it. 
00:33:35.657 [2024-11-03 15:52:13.225221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.657 [2024-11-03 15:52:13.225261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.657 [2024-11-03 15:52:13.225279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.657 [2024-11-03 15:52:13.225288] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.657 [2024-11-03 15:52:13.225297] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.657 [2024-11-03 15:52:13.235510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.657 qpair failed and we were unable to recover it. 00:33:35.657 [2024-11-03 15:52:13.245411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.657 [2024-11-03 15:52:13.245453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.657 [2024-11-03 15:52:13.245470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.657 [2024-11-03 15:52:13.245480] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.657 [2024-11-03 15:52:13.245488] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.657 [2024-11-03 15:52:13.255750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.657 qpair failed and we were unable to recover it. 00:33:35.657 [2024-11-03 15:52:13.265421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.657 [2024-11-03 15:52:13.265461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.265478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.265488] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.265497] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.658 [2024-11-03 15:52:13.275601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.658 qpair failed and we were unable to recover it. 
00:33:35.658 [2024-11-03 15:52:13.285599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.658 [2024-11-03 15:52:13.285641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.285659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.285671] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.285681] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.658 [2024-11-03 15:52:13.295756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.658 qpair failed and we were unable to recover it. 00:33:35.658 [2024-11-03 15:52:13.305480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.658 [2024-11-03 15:52:13.305520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.305538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.305547] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.305555] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.658 [2024-11-03 15:52:13.315654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.658 qpair failed and we were unable to recover it. 00:33:35.658 [2024-11-03 15:52:13.325451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.658 [2024-11-03 15:52:13.325489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.325507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.325517] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.325525] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.658 [2024-11-03 15:52:13.335875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.658 qpair failed and we were unable to recover it. 
00:33:35.658 [2024-11-03 15:52:13.345572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.658 [2024-11-03 15:52:13.345612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.345629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.345638] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.345647] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.658 [2024-11-03 15:52:13.356082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.658 qpair failed and we were unable to recover it. 00:33:35.658 [2024-11-03 15:52:13.365774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.658 [2024-11-03 15:52:13.365816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.365834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.365844] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.365853] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.658 [2024-11-03 15:52:13.375887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.658 qpair failed and we were unable to recover it. 00:33:35.658 [2024-11-03 15:52:13.385716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.658 [2024-11-03 15:52:13.385758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.385775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.385785] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.385794] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.658 [2024-11-03 15:52:13.396100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.658 qpair failed and we were unable to recover it. 
00:33:35.658 [2024-11-03 15:52:13.405898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.658 [2024-11-03 15:52:13.405938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.405956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.405971] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.405980] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.658 [2024-11-03 15:52:13.416135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.658 qpair failed and we were unable to recover it. 00:33:35.658 [2024-11-03 15:52:13.425867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.658 [2024-11-03 15:52:13.425913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.425931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.425941] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.425950] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.658 [2024-11-03 15:52:13.436132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.658 qpair failed and we were unable to recover it. 00:33:35.658 [2024-11-03 15:52:13.445816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.658 [2024-11-03 15:52:13.445853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.658 [2024-11-03 15:52:13.445871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.658 [2024-11-03 15:52:13.445880] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.658 [2024-11-03 15:52:13.445889] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.456236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 
00:33:35.917 [2024-11-03 15:52:13.466031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.466074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.466092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.466102] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.466110] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.476342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 00:33:35.917 [2024-11-03 15:52:13.486072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.486109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.486127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.486136] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.486145] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.496365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 00:33:35.917 [2024-11-03 15:52:13.505983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.506024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.506043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.506053] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.506063] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.516409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 
00:33:35.917 [2024-11-03 15:52:13.526134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.526177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.526195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.526205] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.526214] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.536376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 00:33:35.917 [2024-11-03 15:52:13.546188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.546232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.546254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.546264] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.546272] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.556543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 00:33:35.917 [2024-11-03 15:52:13.566354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.566393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.566410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.566419] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.566428] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.576519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 
00:33:35.917 [2024-11-03 15:52:13.586292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.586330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.586348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.586357] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.586366] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.596589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 00:33:35.917 [2024-11-03 15:52:13.606327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.606374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.606392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.606403] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.606412] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.616694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 00:33:35.917 [2024-11-03 15:52:13.626461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.626503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.626522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.626531] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.626544] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.636733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 
00:33:35.917 [2024-11-03 15:52:13.646539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.646583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.646601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.646611] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.646620] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.656871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 00:33:35.917 [2024-11-03 15:52:13.666622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.666668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.666687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.666697] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.666707] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.917 [2024-11-03 15:52:13.676866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.917 qpair failed and we were unable to recover it. 00:33:35.917 [2024-11-03 15:52:13.686599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.917 [2024-11-03 15:52:13.686637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.917 [2024-11-03 15:52:13.686655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.917 [2024-11-03 15:52:13.686665] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.917 [2024-11-03 15:52:13.686673] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:35.918 [2024-11-03 15:52:13.696783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:35.918 qpair failed and we were unable to recover it. 
00:33:36.176 [2024-11-03 15:52:13.706694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.176 [2024-11-03 15:52:13.706735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.176 [2024-11-03 15:52:13.706753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.176 [2024-11-03 15:52:13.706763] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.176 [2024-11-03 15:52:13.706771] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.176 [2024-11-03 15:52:13.716942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.176 qpair failed and we were unable to recover it. 00:33:36.176 [2024-11-03 15:52:13.726692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.176 [2024-11-03 15:52:13.726732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.176 [2024-11-03 15:52:13.726750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.176 [2024-11-03 15:52:13.726759] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.176 [2024-11-03 15:52:13.726768] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.176 [2024-11-03 15:52:13.737054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.176 qpair failed and we were unable to recover it. 00:33:36.176 [2024-11-03 15:52:13.746768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.176 [2024-11-03 15:52:13.746812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.176 [2024-11-03 15:52:13.746830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.176 [2024-11-03 15:52:13.746840] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.176 [2024-11-03 15:52:13.746849] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.176 [2024-11-03 15:52:13.757134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.176 qpair failed and we were unable to recover it. 
00:33:36.176 [2024-11-03 15:52:13.766818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.176 [2024-11-03 15:52:13.766859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.176 [2024-11-03 15:52:13.766877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.176 [2024-11-03 15:52:13.766887] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.176 [2024-11-03 15:52:13.766896] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.176 [2024-11-03 15:52:13.777163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.176 qpair failed and we were unable to recover it. 00:33:36.176 [2024-11-03 15:52:13.786959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.176 [2024-11-03 15:52:13.787005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.176 [2024-11-03 15:52:13.787023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.176 [2024-11-03 15:52:13.787032] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.176 [2024-11-03 15:52:13.787041] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.176 [2024-11-03 15:52:13.797238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.176 qpair failed and we were unable to recover it. 00:33:36.176 [2024-11-03 15:52:13.807092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.176 [2024-11-03 15:52:13.807138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.176 [2024-11-03 15:52:13.807156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.176 [2024-11-03 15:52:13.807166] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.176 [2024-11-03 15:52:13.807174] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.176 [2024-11-03 15:52:13.817236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.176 qpair failed and we were unable to recover it. 
00:33:36.176 [2024-11-03 15:52:13.827015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.176 [2024-11-03 15:52:13.827058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.176 [2024-11-03 15:52:13.827076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.176 [2024-11-03 15:52:13.827085] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.176 [2024-11-03 15:52:13.827094] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.176 [2024-11-03 15:52:13.837378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.176 qpair failed and we were unable to recover it. 00:33:36.176 [2024-11-03 15:52:13.847049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.176 [2024-11-03 15:52:13.847090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.176 [2024-11-03 15:52:13.847108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.176 [2024-11-03 15:52:13.847117] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.176 [2024-11-03 15:52:13.847126] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.176 [2024-11-03 15:52:13.857381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.176 qpair failed and we were unable to recover it. 00:33:36.176 [2024-11-03 15:52:13.867189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.176 [2024-11-03 15:52:13.867231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.176 [2024-11-03 15:52:13.867249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.176 [2024-11-03 15:52:13.867258] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.176 [2024-11-03 15:52:13.867267] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.177 [2024-11-03 15:52:13.877422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.177 qpair failed and we were unable to recover it. 
00:33:36.177 [2024-11-03 15:52:13.887240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.177 [2024-11-03 15:52:13.887284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.177 [2024-11-03 15:52:13.887308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.177 [2024-11-03 15:52:13.887317] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.177 [2024-11-03 15:52:13.887327] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.177 [2024-11-03 15:52:13.897590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.177 qpair failed and we were unable to recover it. 00:33:36.177 [2024-11-03 15:52:13.907131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.177 [2024-11-03 15:52:13.907168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.177 [2024-11-03 15:52:13.907187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.177 [2024-11-03 15:52:13.907196] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.177 [2024-11-03 15:52:13.907205] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.177 [2024-11-03 15:52:13.917517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.177 qpair failed and we were unable to recover it. 00:33:36.177 [2024-11-03 15:52:13.927360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.177 [2024-11-03 15:52:13.927398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.177 [2024-11-03 15:52:13.927416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.177 [2024-11-03 15:52:13.927426] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.177 [2024-11-03 15:52:13.927434] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.177 [2024-11-03 15:52:13.937604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.177 qpair failed and we were unable to recover it. 
00:33:36.177 [2024-11-03 15:52:13.947399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.177 [2024-11-03 15:52:13.947441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.177 [2024-11-03 15:52:13.947459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.177 [2024-11-03 15:52:13.947469] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.177 [2024-11-03 15:52:13.947479] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.177 [2024-11-03 15:52:13.957556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.177 qpair failed and we were unable to recover it. 00:33:36.434 [2024-11-03 15:52:13.967402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.434 [2024-11-03 15:52:13.967448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.434 [2024-11-03 15:52:13.967465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.434 [2024-11-03 15:52:13.967475] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.434 [2024-11-03 15:52:13.967488] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.434 [2024-11-03 15:52:13.977777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.434 qpair failed and we were unable to recover it. 00:33:36.434 [2024-11-03 15:52:13.987311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.434 [2024-11-03 15:52:13.987353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.435 [2024-11-03 15:52:13.987371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.435 [2024-11-03 15:52:13.987381] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.435 [2024-11-03 15:52:13.987390] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.435 [2024-11-03 15:52:13.997838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.435 qpair failed and we were unable to recover it. 
00:33:36.435 [2024-11-03 15:52:14.007516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.435 [2024-11-03 15:52:14.007559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.435 [2024-11-03 15:52:14.007578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.435 [2024-11-03 15:52:14.007588] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.435 [2024-11-03 15:52:14.007597] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.435 [2024-11-03 15:52:14.017795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.435 qpair failed and we were unable to recover it. 00:33:36.435 [2024-11-03 15:52:14.027636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.435 [2024-11-03 15:52:14.027677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.435 [2024-11-03 15:52:14.027695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.435 [2024-11-03 15:52:14.027705] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.435 [2024-11-03 15:52:14.027713] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:33:36.435 [2024-11-03 15:52:14.038000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:36.435 qpair failed and we were unable to recover it. 
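Every iteration above is the same three-step failure: the target rejects the Fabric CONNECT because it no longer knows controller ID 0x1 (sct 1, sc 130), the host's completion poller then reports CQ transport error -6 (ENXIO), and the qpair is abandoned. While the host loops like this, the target's view can be inspected out-of-band; a minimal sketch, assuming the SPDK tree path taken from this log and that these RPCs are available in this build (the NQN is the one from the log):

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk          # path as used in this log
scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1

An empty controller list here would confirm that the target side genuinely lost the controller state the host keeps trying to reattach to.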
00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Write completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 Read completed with error (sct=0, sc=8) 00:33:37.365 starting I/O failed 00:33:37.365 [2024-11-03 15:52:15.043155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:37.365 [2024-11-03 15:52:15.050519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.365 [2024-11-03 15:52:15.050564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.365 [2024-11-03 15:52:15.050584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.365 [2024-11-03 15:52:15.050594] 
nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.365 [2024-11-03 15:52:15.050603] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:33:37.365 [2024-11-03 15:52:15.060934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:37.365 qpair failed and we were unable to recover it. 00:33:37.365 [2024-11-03 15:52:15.070682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.365 [2024-11-03 15:52:15.070726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.365 [2024-11-03 15:52:15.070744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.365 [2024-11-03 15:52:15.070754] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.365 [2024-11-03 15:52:15.070765] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:33:37.365 [2024-11-03 15:52:15.081033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:37.365 qpair failed and we were unable to recover it. 00:33:37.365 [2024-11-03 15:52:15.090779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.365 [2024-11-03 15:52:15.090821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.365 [2024-11-03 15:52:15.090842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.365 [2024-11-03 15:52:15.090852] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.365 [2024-11-03 15:52:15.090862] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:33:37.365 [2024-11-03 15:52:15.101116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:37.365 qpair failed and we were unable to recover it. 
00:33:37.365 [2024-11-03 15:52:15.110766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.365 [2024-11-03 15:52:15.110810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.365 [2024-11-03 15:52:15.110829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.365 [2024-11-03 15:52:15.110838] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.365 [2024-11-03 15:52:15.110847] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:33:37.366 [2024-11-03 15:52:15.121073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:37.366 qpair failed and we were unable to recover it. 00:33:37.366 [2024-11-03 15:52:15.121204] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:33:37.366 A controller has encountered a failure and is being reset. 00:33:37.366 [2024-11-03 15:52:15.130867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.366 [2024-11-03 15:52:15.130912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.366 [2024-11-03 15:52:15.130938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.366 [2024-11-03 15:52:15.130952] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.366 [2024-11-03 15:52:15.130964] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:33:37.366 [2024-11-03 15:52:15.141138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.366 qpair failed and we were unable to recover it. 00:33:37.366 [2024-11-03 15:52:15.151014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.366 [2024-11-03 15:52:15.151055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.366 [2024-11-03 15:52:15.151076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.366 [2024-11-03 15:52:15.151086] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.366 [2024-11-03 15:52:15.151095] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:33:37.623 [2024-11-03 15:52:15.161305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.623 qpair failed and we were unable to recover it. 
00:33:37.623 [2024-11-03 15:52:15.161436] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:37.623 [2024-11-03 15:52:15.195108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:33:37.623 Controller properly reset. 00:33:37.623 Initializing NVMe Controllers 00:33:37.623 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:37.623 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:37.623 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:37.623 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:37.623 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:37.623 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:37.623 Initialization complete. Launching workers. 00:33:37.623 Starting thread on core 1 00:33:37.623 Starting thread on core 2 00:33:37.623 Starting thread on core 3 00:33:37.623 Starting thread on core 0 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:37.623 00:33:37.623 real 0m11.997s 00:33:37.623 user 0m24.804s 00:33:37.623 sys 0m2.960s 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.623 ************************************ 00:33:37.623 END TEST nvmf_target_disconnect_tc2 00:33:37.623 ************************************ 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:37.623 ************************************ 00:33:37.623 START TEST nvmf_target_disconnect_tc3 00:33:37.623 ************************************ 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc3 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2482437 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:33:37.623 15:52:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:33:40.147 
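The tc3 flow that follows is: start the reconnect example against 192.168.100.8 with 192.168.100.9 advertised as the alternate address, give it a moment to ramp up, then kill the original target and bring up a fresh one to be configured on the alternate address. A condensed sketch using only commands that appear in this log (OLD_TGT_PID stands in for pid 2481297, and paths are relative to the SPDK tree):

build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' &
sleep 2
kill -9 "$OLD_TGT_PID"                        # the first nvmf_tgt, pid 2481297 above
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # replacement target for 192.168.100.9

The I/O failure storm and the "Killed" message below are the expected fallout of that kill while 32 commands are in flight.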
15:52:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2481297 00:33:40.147 15:52:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:33:41.092 Write completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Write completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Write completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Write completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Write completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Write completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Write completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Write completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Write completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 Read completed with error (sct=0, sc=8) 00:33:41.092 starting I/O failed 00:33:41.092 [2024-11-03 15:52:18.537630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:33:41.666 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2481297 Killed "${NVMF_APP[@]}" "$@" 00:33:41.666 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:33:41.666 15:52:19 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2483207 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2483207 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2483207 ']' 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:41.667 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:41.667 [2024-11-03 15:52:19.402899] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:33:41.667 [2024-11-03 15:52:19.402954] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.924 [2024-11-03 15:52:19.498585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:41.924 [2024-11-03 15:52:19.520123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.924 [2024-11-03 15:52:19.520166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.924 [2024-11-03 15:52:19.520176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:41.924 [2024-11-03 15:52:19.520186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:41.924 [2024-11-03 15:52:19.520193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
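Since tracing was enabled on the target (-e 0xFFFF), the notices above can be acted on directly; a short sketch, with paths relative to the SPDK build tree as elsewhere in this log:

build/bin/spdk_trace -s nvmf -i 0     # snapshot events from the running target (shm id 0)
cp /dev/shm/nvmf_trace.0 /tmp/        # or keep the shm file for offline analysis, per the notice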
00:33:41.924 [2024-11-03 15:52:19.522025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:41.924 [2024-11-03 15:52:19.522132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:41.924 [2024-11-03 15:52:19.522239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:41.924 [2024-11-03 15:52:19.522241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Read completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Read completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Read completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Read completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Read completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Read completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Read completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Read completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Write completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 Read completed with error (sct=0, sc=8) 00:33:41.924 starting I/O failed 00:33:41.924 [2024-11-03 15:52:19.542712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 
00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@866 -- # return 0 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:41.924 Malloc0 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.924 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:42.182 [2024-11-03 15:52:19.734221] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x958640/0x9642d0) succeed. 00:33:42.182 [2024-11-03 15:52:19.743739] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x959c80/0x9a5970) succeed. 
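With the malloc bdev and RDMA transport created above, the rpc_cmd calls that follow add the subsystem, attach the namespace, and open the listener on the failover address. Outside the test harness the same bring-up can be driven with scripts/rpc.py against the target's default RPC socket, using the exact arguments from this run (a sketch; run from the SPDK repo root):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420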
00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:42.182 [2024-11-03 15:52:19.892587] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.182 15:52:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2482437 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 
starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Read completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 Write completed with error (sct=0, sc=8) 00:33:43.113 starting I/O failed 00:33:43.113 [2024-11-03 15:52:20.547809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error 
(sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Write completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.045 Read completed with error (sct=0, sc=8) 00:33:44.045 starting I/O failed 00:33:44.046 Write completed with error (sct=0, sc=8) 00:33:44.046 starting I/O failed 00:33:44.046 Read completed with error (sct=0, sc=8) 00:33:44.046 starting I/O failed 00:33:44.046 [2024-11-03 15:52:21.552771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:33:44.046 [2024-11-03 15:52:21.552799] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:33:44.046 A controller has encountered a failure and is being reset. 00:33:44.046 Resorting to new failover address 192.168.100.9 00:33:44.046 [2024-11-03 15:52:21.552902] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:44.046 [2024-11-03 15:52:21.552991] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:44.046 [2024-11-03 15:52:21.584518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:44.046 Controller properly reset. 00:33:48.228 Initializing NVMe Controllers 00:33:48.228 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:48.228 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:48.228 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:48.228 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:48.228 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:48.228 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:48.228 Initialization complete. Launching workers. 
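The reset only completes once the listener at the failover address answers; the 'Controller properly reset.' line above confirms it before the workers resume below. From a host shell the same listener can be probed with nvme-cli before reconnecting (a sketch; assumes the nvme-rdma kernel module is loaded, as nvmftestfini's counterpart modprobe shows later in this log, and relies on the discovery listener the test added at 192.168.100.9:4420):

    modprobe nvme-rdma
    nvme discover -t rdma -a 192.168.100.9 -s 4420   # should list nqn.2016-06.io.spdk:cnode1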
00:33:48.228 Starting thread on core 1 00:33:48.228 Starting thread on core 2 00:33:48.228 Starting thread on core 3 00:33:48.228 Starting thread on core 0 00:33:48.228 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:33:48.228 00:33:48.228 real 0m10.294s 00:33:48.228 user 1m3.132s 00:33:48.228 sys 0m1.822s 00:33:48.228 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:48.228 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:48.228 ************************************ 00:33:48.228 END TEST nvmf_target_disconnect_tc3 00:33:48.228 ************************************ 00:33:48.228 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:48.228 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:48.228 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:48.228 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:33:48.228 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:48.228 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:48.229 rmmod nvme_rdma 00:33:48.229 rmmod nvme_fabrics 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2483207 ']' 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2483207 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2483207 ']' 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 2483207 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2483207 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2483207' 00:33:48.229 killing process with pid 2483207 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 2483207 00:33:48.229 15:52:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 2483207 00:33:48.488 15:52:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:48.488 15:52:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:48.488 00:33:48.488 real 0m30.929s 00:33:48.488 user 1m56.488s 00:33:48.488 sys 0m10.637s 00:33:48.488 15:52:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:48.488 15:52:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:48.488 ************************************ 00:33:48.488 END TEST nvmf_target_disconnect 00:33:48.488 ************************************ 00:33:48.488 15:52:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:48.488 00:33:48.488 real 7m7.974s 00:33:48.488 user 20m13.420s 00:33:48.488 sys 1m38.488s 00:33:48.488 15:52:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:48.488 15:52:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.488 ************************************ 00:33:48.488 END TEST nvmf_host 00:33:48.488 ************************************ 00:33:48.488 15:52:26 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:33:48.488 00:33:48.488 real 26m34.905s 00:33:48.488 user 78m11.854s 00:33:48.488 sys 6m23.957s 00:33:48.488 15:52:26 nvmf_rdma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:48.488 15:52:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:48.488 ************************************ 00:33:48.488 END TEST nvmf_rdma 00:33:48.488 ************************************ 00:33:48.488 15:52:26 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:33:48.488 15:52:26 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:48.488 15:52:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:48.488 15:52:26 -- common/autotest_common.sh@10 -- # set +x 00:33:48.488 ************************************ 00:33:48.488 START TEST spdkcli_nvmf_rdma 00:33:48.488 ************************************ 00:33:48.488 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:33:48.747 * Looking for test storage... 
00:33:48.747 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lcov --version 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:48.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.747 --rc genhtml_branch_coverage=1 00:33:48.747 --rc genhtml_function_coverage=1 00:33:48.747 --rc genhtml_legend=1 00:33:48.747 --rc geninfo_all_blocks=1 00:33:48.747 --rc geninfo_unexecuted_blocks=1 00:33:48.747 00:33:48.747 ' 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:48.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:33:48.747 --rc genhtml_branch_coverage=1 00:33:48.747 --rc genhtml_function_coverage=1 00:33:48.747 --rc genhtml_legend=1 00:33:48.747 --rc geninfo_all_blocks=1 00:33:48.747 --rc geninfo_unexecuted_blocks=1 00:33:48.747 00:33:48.747 ' 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:48.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.747 --rc genhtml_branch_coverage=1 00:33:48.747 --rc genhtml_function_coverage=1 00:33:48.747 --rc genhtml_legend=1 00:33:48.747 --rc geninfo_all_blocks=1 00:33:48.747 --rc geninfo_unexecuted_blocks=1 00:33:48.747 00:33:48.747 ' 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:48.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.747 --rc genhtml_branch_coverage=1 00:33:48.747 --rc genhtml_function_coverage=1 00:33:48.747 --rc genhtml_legend=1 00:33:48.747 --rc geninfo_all_blocks=1 00:33:48.747 --rc geninfo_unexecuted_blocks=1 00:33:48.747 00:33:48.747 ' 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.747 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:48.748 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2484406 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2484406 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # '[' -z 2484406 ']' 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:48.748 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:48.748 [2024-11-03 15:52:26.473182] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:33:48.748 [2024-11-03 15:52:26.473236] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484406 ] 00:33:49.026 [2024-11-03 15:52:26.550531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:49.026 [2024-11-03 15:52:26.573963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.026 [2024-11-03 15:52:26.573965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@866 -- # return 0 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:33:49.026 15:52:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
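gather_supported_nvmf_pci_devs assembles the e810/x722/mlx device-ID tables above and then walks the PCI bus; the two ports it reports next carry Mellanox vendor ID 0x15b3 with device ID 0x1015 (a ConnectX-4 Lx part). The same inventory can be taken directly with lspci (a sketch):

    lspci -nn -d 15b3:
    # expected on this host: 0000:d9:00.0 and 0000:d9:00.1, both [15b3:1015]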
00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:57.142 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:57.142 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:57.142 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:57.142 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:33:57.142 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:57.143 
15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:57.143 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:57.143 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:57.143 altname enp217s0f0np0 00:33:57.143 altname ens818f0np0 00:33:57.143 inet 192.168.100.8/24 scope global mlx_0_0 00:33:57.143 valid_lft forever preferred_lft forever 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:57.143 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:57.143 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:57.143 altname enp217s0f1np1 00:33:57.143 altname ens818f1np1 00:33:57.143 inet 192.168.100.9/24 scope global mlx_0_1 00:33:57.143 valid_lft forever preferred_lft forever 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:57.143 192.168.100.9' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:57.143 192.168.100.9' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:57.143 192.168.100.9' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:57.143 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:57.144 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:57.144 15:52:33 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:57.144 15:52:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:33:57.144 15:52:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:57.144 15:52:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:57.144 15:52:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:57.144 15:52:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:57.144 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:57.144 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:57.144 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:57.144 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:57.144 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:33:57.144 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:57.144 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:57.144 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:57.144 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:57.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:57.144 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:57.144 ' 00:33:59.046 [2024-11-03 15:52:36.434065] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13d7840/0x12c1ec0) succeed. 00:33:59.046 [2024-11-03 15:52:36.443747] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13d2a00/0x1303560) succeed. 
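spdkcli_job.py replays the quoted batch above and verifies each command's outcome, producing the 'Executing command' lines that follow. The same objects can be created one at a time with scripts/spdkcli.py, which also executes a command passed as its arguments, as the check_match step further below does with 'll /nvmf' (a sketch using entries from this batch):

    ./scripts/spdkcli.py '/bdevs/malloc create 32 512 Malloc1'
    ./scripts/spdkcli.py '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
    ./scripts/spdkcli.py ll /nvmf   # dump the resulting config tree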
00:34:00.422 [2024-11-03 15:52:37.841456] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:34:02.954 [2024-11-03 15:52:40.329256] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:34:04.855 [2024-11-03 15:52:42.496320] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:34:06.758 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:06.758 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:06.758 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:06.758 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:06.758 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:06.758 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:06.758 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:06.758 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:34:06.758 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:34:06.758 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:06.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:34:06.759 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:34:06.759 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:06.759 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:06.759 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:06.759 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:06.759 15:52:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:06.759 15:52:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:06.759 15:52:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:06.759 15:52:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:06.759 15:52:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:06.759 15:52:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:06.759 15:52:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:34:06.759 15:52:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:07.018 15:52:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:07.018 15:52:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:07.018 15:52:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:07.018 15:52:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:07.018 15:52:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:07.018 15:52:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:07.018 15:52:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:07.018 15:52:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:07.018 15:52:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:07.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:07.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:07.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:07.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:34:07.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:34:07.018 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:07.018 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:07.018 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:07.018 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:07.018 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:07.018 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:07.018 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:07.018 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:07.018 ' 00:34:12.288 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:12.288 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:12.288 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:12.288 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:12.288 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:34:12.288 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:34:12.288 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:12.288 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:12.288 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:12.288 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:12.288 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:12.288 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:12.288 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:12.288 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:12.288 15:52:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:12.288 15:52:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:12.288 15:52:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:12.288 15:52:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2484406 00:34:12.288 15:52:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # '[' -z 2484406 ']' 00:34:12.288 15:52:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # kill -0 2484406 00:34:12.289 15:52:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@957 -- # uname 00:34:12.289 15:52:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:12.289 15:52:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2484406 00:34:12.289 15:52:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:12.289 15:52:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:12.289 15:52:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2484406' 00:34:12.289 killing process with pid 2484406 00:34:12.289 15:52:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@971 -- # kill 2484406 00:34:12.289 15:52:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@976 -- # wait 2484406 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
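
The kill sequence traced here is SPDK's killprocess helper from common/autotest_common.sh: confirm a pid was recorded, probe it with kill -0, check the command name so a sudo wrapper is never signalled directly, then kill and wait so the exit status is reaped. A rough re-creation under those observations (the sudo-child lookup via pgrep is an assumption, not a copy of the original):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # @952: no pid recorded
    kill -0 "$pid" 2>/dev/null || return 0    # @956: already gone
    if [ "$(uname)" = Linux ]; then
        # @957-962: never signal the sudo wrapper itself
        if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            pid=$(pgrep -P "$pid")            # assumed child lookup
        fi
    fi
    echo "killing process with pid $pid"      # @970
    kill "$pid"                               # @971
    wait "$pid" 2>/dev/null || true           # @976: reap it; works because
                                              # the target was started by this shell
}
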
00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:12.548 rmmod nvme_rdma 00:34:12.548 rmmod nvme_fabrics 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:12.548 00:34:12.548 real 0m24.088s 00:34:12.548 user 0m52.606s 00:34:12.548 sys 0m6.285s 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:12.548 15:52:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:12.548 ************************************ 00:34:12.548 END TEST spdkcli_nvmf_rdma 00:34:12.548 ************************************ 00:34:12.548 15:52:50 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:12.548 15:52:50 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:34:12.548 15:52:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:12.548 15:52:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:12.548 15:52:50 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:34:12.548 15:52:50 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:34:12.548 15:52:50 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:34:12.548 15:52:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:12.548 15:52:50 -- common/autotest_common.sh@10 -- # set +x 00:34:12.806 15:52:50 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:34:12.806 15:52:50 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:34:12.807 15:52:50 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:34:12.807 15:52:50 -- common/autotest_common.sh@10 -- # set +x 00:34:19.377 INFO: APP EXITING 00:34:19.377 INFO: killing all VMs 00:34:19.377 INFO: killing vhost app 00:34:19.377 INFO: EXIT DONE 00:34:21.909 Waiting for block devices as requested 00:34:22.167 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:22.167 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:22.167 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:22.426 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:22.426 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:22.426 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:22.685 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:22.685 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:34:22.685 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:22.685 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:22.944 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:22.944 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:22.944 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:23.202 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:23.202 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:23.202 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:23.501 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:26.811 Cleaning 00:34:26.811 Removing: /var/run/dpdk/spdk0/config 00:34:26.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:26.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:26.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:26.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:26.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:26.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:26.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:26.811 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:26.811 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:26.811 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:26.811 Removing: /var/run/dpdk/spdk1/config 00:34:26.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:26.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:26.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:26.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:26.811 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:26.812 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:26.812 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:26.812 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:26.812 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:26.812 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:26.812 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:26.812 Removing: /var/run/dpdk/spdk2/config 00:34:26.812 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:26.812 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:26.812 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:26.812 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:26.812 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:26.812 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:26.812 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:26.812 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:26.812 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:26.812 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:26.812 Removing: /var/run/dpdk/spdk3/config 00:34:26.812 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:26.812 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:26.812 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:26.812 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:26.812 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:26.812 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:26.812 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:26.812 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:26.812 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:26.812 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:26.812 Removing: /var/run/dpdk/spdk4/config 00:34:26.812 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:26.812 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:26.812 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:26.812 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:26.812 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:26.812 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:26.812 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:26.812 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:26.812 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:26.812 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:26.812 Removing: /dev/shm/bdevperf_trace.pid2132284 00:34:26.812 Removing: /dev/shm/bdev_svc_trace.1 00:34:26.812 Removing: /dev/shm/nvmf_trace.0 00:34:26.812 Removing: /dev/shm/spdk_tgt_trace.pid2088269 00:34:26.812 Removing: /var/run/dpdk/spdk0 00:34:26.812 Removing: /var/run/dpdk/spdk1 00:34:26.812 Removing: /var/run/dpdk/spdk2 00:34:26.812 Removing: /var/run/dpdk/spdk3 00:34:26.812 Removing: /var/run/dpdk/spdk4 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2085711 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2086971 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2088269 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2088910 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2089837 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2090024 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2091129 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2091138 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2091501 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2096551 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2098093 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2098418 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2098742 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2098961 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2099161 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2099450 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2099730 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2100052 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2100649 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2103808 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2104102 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2104272 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2104402 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2104804 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2104966 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2105422 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2105541 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2105837 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2105847 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2106126 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2106158 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2106638 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2106824 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2107161 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2111299 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2115566 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2126016 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2126976 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2132284 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2132534 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2136573 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2142326 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2145064 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2155060 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2179696 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2184278 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2279281 
00:34:26.812 Removing: /var/run/dpdk/spdk_pid2284423 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2290120 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2298953 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2330155 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2335419 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2377047 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2377952 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2379360 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2380472 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2385371 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2392279 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2393272 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2394137 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2394977 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2395469 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2399736 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2399816 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2404255 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2404788 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2405324 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2406112 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2406119 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2408527 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2410388 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2412243 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2414130 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2415983 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2417890 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2424557 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2425138 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2427420 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2428622 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2435632 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2438290 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2443694 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2453795 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2453801 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2474080 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2474352 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2480198 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2480512 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2482437 00:34:26.812 Removing: /var/run/dpdk/spdk_pid2484406 00:34:26.812 Clean 00:34:27.070 15:53:04 -- common/autotest_common.sh@1451 -- # return 0 00:34:27.070 15:53:04 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:34:27.070 15:53:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:27.070 15:53:04 -- common/autotest_common.sh@10 -- # set +x 00:34:27.070 15:53:04 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:34:27.070 15:53:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:27.070 15:53:04 -- common/autotest_common.sh@10 -- # set +x 00:34:27.070 15:53:04 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:34:27.070 15:53:04 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:34:27.070 15:53:04 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:34:27.070 15:53:04 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:34:27.070 15:53:04 -- spdk/autotest.sh@394 -- # hostname 00:34:27.070 15:53:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:34:27.328 geninfo: WARNING: invalid characters removed from testname! 00:34:49.250 15:53:24 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:34:49.509 15:53:27 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:34:51.411 15:53:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:34:52.789 15:53:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:34:54.694 15:53:32 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:34:56.073 15:53:33 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:34:57.977 15:53:35 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:57.977 15:53:35 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:57.977 15:53:35 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:34:57.977 15:53:35 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:57.977 15:53:35 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:57.977 15:53:35 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:34:57.977 + [[ -n 1990966 ]] 00:34:57.977 + sudo kill 1990966 00:34:57.987 [Pipeline] } 00:34:58.003 [Pipeline] // stage 00:34:58.008 [Pipeline] } 00:34:58.022 [Pipeline] // timeout 00:34:58.027 [Pipeline] } 00:34:58.041 [Pipeline] // catchError 00:34:58.046 [Pipeline] } 00:34:58.060 [Pipeline] // wrap 00:34:58.066 [Pipeline] } 00:34:58.079 [Pipeline] // catchError 00:34:58.089 [Pipeline] stage 00:34:58.091 [Pipeline] { (Epilogue) 00:34:58.104 [Pipeline] catchError 00:34:58.106 [Pipeline] { 00:34:58.120 [Pipeline] echo 00:34:58.122 Cleanup processes 00:34:58.129 [Pipeline] sh 00:34:58.411 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:58.411 2503617 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:58.426 [Pipeline] sh 00:34:58.707 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:34:58.707 ++ grep -v 'sudo pgrep' 00:34:58.707 ++ awk '{print $1}' 00:34:58.707 + sudo kill -9 00:34:58.707 + true 00:34:58.720 [Pipeline] sh 00:34:59.003 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:59.003 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:35:05.560 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:35:08.850 [Pipeline] sh 00:35:09.165 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:09.165 Artifacts sizes are good 00:35:09.222 [Pipeline] archiveArtifacts 00:35:09.229 Archiving artifacts 00:35:09.358 [Pipeline] sh 00:35:09.638 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:35:09.651 [Pipeline] cleanWs 00:35:09.660 [WS-CLEANUP] Deleting project workspace... 00:35:09.660 [WS-CLEANUP] Deferred wipeout is used... 00:35:09.666 [WS-CLEANUP] done 00:35:09.667 [Pipeline] } 00:35:09.678 [Pipeline] // catchError 00:35:09.686 [Pipeline] sh 00:35:09.958 + logger -p user.info -t JENKINS-CI 00:35:09.966 [Pipeline] } 00:35:09.979 [Pipeline] // stage 00:35:09.983 [Pipeline] } 00:35:09.991 [Pipeline] // node 00:35:09.994 [Pipeline] End of Pipeline 00:35:10.021 Finished: SUCCESS
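
For reference, the coverage post-processing traced near the end of the run reduces to a capture, merge, filter pipeline. A condensed equivalent with the flags taken from the trace (the repeated --rc genhtml_* and geninfo_* switches are folded away for brevity; cov_base.info is captured the same way before the tests, earlier in the full log):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
OUT=$SPDK/../output
RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# 1. Capture the counters accumulated while the tests ran.
lcov $RC -q -c --no-external -d "$SPDK" -t spdk-wfp-21 -o "$OUT/cov_test.info"
# 2. Merge the pre-test baseline with the test capture so files the tests
#    never touched still appear with zero counts.
lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# 3. Strip third-party and uninteresting paths from the merged report, in place.
lcov $RC -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
lcov $RC -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
lcov $RC -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
lcov $RC -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
lcov $RC -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"
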